2026-04-10 00:00:09.722253 | Job console starting
2026-04-10 00:00:09.753938 | Updating git repos
2026-04-10 00:00:09.840909 | Cloning repos into workspace
2026-04-10 00:00:10.253081 | Restoring repo states
2026-04-10 00:00:10.275539 | Merging changes
2026-04-10 00:00:10.275558 | Checking out repos
2026-04-10 00:00:10.810329 | Preparing playbooks
2026-04-10 00:00:11.841276 | Running Ansible setup
2026-04-10 00:00:18.400138 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-10 00:00:21.232210 |
2026-04-10 00:00:21.232389 | PLAY [Base pre]
2026-04-10 00:00:21.274205 |
2026-04-10 00:00:21.274362 | TASK [Setup log path fact]
2026-04-10 00:00:21.294404 | orchestrator | ok
2026-04-10 00:00:21.348725 |
2026-04-10 00:00:21.348892 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-10 00:00:21.396872 | orchestrator | ok
2026-04-10 00:00:21.429559 |
2026-04-10 00:00:21.429758 | TASK [emit-job-header : Print job information]
2026-04-10 00:00:21.480454 | # Job Information
2026-04-10 00:00:21.480668 | Ansible Version: 2.16.14
2026-04-10 00:00:21.480717 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-10 00:00:21.480752 | Pipeline: periodic-midnight
2026-04-10 00:00:21.480775 | Executor: 521e9411259a
2026-04-10 00:00:21.480795 | Triggered by: https://github.com/osism/testbed
2026-04-10 00:00:21.480816 | Event ID: 656309ad79aa4e9d975816b3f7500521
2026-04-10 00:00:21.489076 |
2026-04-10 00:00:21.489195 | LOOP [emit-job-header : Print node information]
2026-04-10 00:00:21.607653 | orchestrator | ok:
2026-04-10 00:00:21.607945 | orchestrator | # Node Information
2026-04-10 00:00:21.607983 | orchestrator | Inventory Hostname: orchestrator
2026-04-10 00:00:21.608009 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-10 00:00:21.608030 | orchestrator | Username: zuul-testbed04
2026-04-10 00:00:21.608052 | orchestrator | Distro: Debian 12.13
2026-04-10 00:00:21.608076 | orchestrator | Provider: static-testbed
2026-04-10 00:00:21.608097 | orchestrator | Region:
2026-04-10 00:00:21.608119 | orchestrator | Label: testbed-orchestrator
2026-04-10 00:00:21.608139 | orchestrator | Product Name: OpenStack Nova
2026-04-10 00:00:21.608159 | orchestrator | Interface IP: 81.163.193.140
2026-04-10 00:00:21.636240 |
2026-04-10 00:00:21.636370 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-10 00:00:23.309681 | orchestrator -> localhost | changed
2026-04-10 00:00:23.317970 |
2026-04-10 00:00:23.318094 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-10 00:00:25.423293 | orchestrator -> localhost | changed
2026-04-10 00:00:25.470755 |
2026-04-10 00:00:25.470906 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-10 00:00:26.047459 | orchestrator -> localhost | ok
2026-04-10 00:00:26.053287 |
2026-04-10 00:00:26.053381 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-10 00:00:26.072297 | orchestrator | ok
2026-04-10 00:00:26.122664 | orchestrator | included: /var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-10 00:00:26.152021 |
2026-04-10 00:00:26.152110 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-10 00:00:28.145303 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-10 00:00:28.145483 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/work/3fbdc7eebc9a432fbfedb79498829f7e_id_rsa
2026-04-10 00:00:28.145515 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/work/3fbdc7eebc9a432fbfedb79498829f7e_id_rsa.pub
2026-04-10 00:00:28.145538 | orchestrator -> localhost | The key fingerprint is:
2026-04-10 00:00:28.145562 | orchestrator -> localhost | SHA256:18sISwZsC375M0Opd8BS3M/tlYXP/8gBjKAqDAW6pS0 zuul-build-sshkey
2026-04-10 00:00:28.145580 | orchestrator -> localhost | The key's randomart image is:
2026-04-10 00:00:28.145608 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-10 00:00:28.145627 | orchestrator -> localhost | |. |
2026-04-10 00:00:28.145645 | orchestrator -> localhost | |.. . . . . |
2026-04-10 00:00:28.145662 | orchestrator -> localhost | |. o . + + . . .|
2026-04-10 00:00:28.145679 | orchestrator -> localhost | | * . o B o * . oo|
2026-04-10 00:00:28.145707 | orchestrator -> localhost | |E . . * S o * ..+|
2026-04-10 00:00:28.145731 | orchestrator -> localhost | | + o B = o + ..|
2026-04-10 00:00:28.145749 | orchestrator -> localhost | | o . . B o o o .|
2026-04-10 00:00:28.145766 | orchestrator -> localhost | | . . = . o.|
2026-04-10 00:00:28.145783 | orchestrator -> localhost | | o .|
2026-04-10 00:00:28.145800 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-10 00:00:28.145843 | orchestrator -> localhost | ok: Runtime: 0:00:00.437543
2026-04-10 00:00:28.152065 |
2026-04-10 00:00:28.152146 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-10 00:00:28.190012 | orchestrator | ok
2026-04-10 00:00:28.263861 | orchestrator | included: /var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-10 00:00:28.311204 |
2026-04-10 00:00:28.311304 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-10 00:00:28.359196 | orchestrator | skipping: Conditional result was False
2026-04-10 00:00:28.365675 |
2026-04-10 00:00:28.365792 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-10 00:00:29.753139 | orchestrator | changed
2026-04-10 00:00:29.758167 |
2026-04-10 00:00:29.758251 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-10 00:00:30.063249 | orchestrator | ok
2026-04-10 00:00:30.068264 |
2026-04-10 00:00:30.068343 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-10 00:00:30.601212 | orchestrator | ok
2026-04-10 00:00:30.605981 |
2026-04-10 00:00:30.606057 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-10 00:00:31.097942 | orchestrator | ok
2026-04-10 00:00:31.105630 |
2026-04-10 00:00:31.105724 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-10 00:00:31.185354 | orchestrator | skipping: Conditional result was False
2026-04-10 00:00:31.191001 |
2026-04-10 00:00:31.191090 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-10 00:00:32.183366 | orchestrator -> localhost | changed
2026-04-10 00:00:32.207364 |
2026-04-10 00:00:32.207456 | TASK [add-build-sshkey : Add back temp key]
2026-04-10 00:00:33.236911 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/work/3fbdc7eebc9a432fbfedb79498829f7e_id_rsa (zuul-build-sshkey)
2026-04-10 00:00:33.237092 | orchestrator -> localhost | ok: Runtime: 0:00:00.047243
2026-04-10 00:00:33.242781 |
2026-04-10 00:00:33.252403 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-10 00:00:33.799247 | orchestrator | ok
2026-04-10 00:00:33.804081 |
2026-04-10 00:00:33.804166 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-10 00:00:33.842712 | orchestrator | skipping: Conditional result was False
2026-04-10 00:00:33.994684 |
2026-04-10 00:00:33.994824 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-10 00:00:34.673665 | orchestrator | ok
2026-04-10 00:00:34.705055 |
2026-04-10 00:00:34.705154 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-10 00:00:34.773844 | orchestrator | ok
2026-04-10 00:00:34.780287 |
2026-04-10 00:00:34.780373 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-10 00:00:35.836612 | orchestrator -> localhost | ok
2026-04-10 00:00:35.852656 |
2026-04-10 00:00:35.852779 | TASK [validate-host : Collect information about the host]
2026-04-10 00:00:38.362746 | orchestrator | ok
2026-04-10 00:00:38.388662 |
2026-04-10 00:00:38.388795 | TASK [validate-host : Sanitize hostname]
2026-04-10 00:00:38.496238 | orchestrator | ok
2026-04-10 00:00:38.500780 |
2026-04-10 00:00:38.500861 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-10 00:00:40.119912 | orchestrator -> localhost | changed
2026-04-10 00:00:40.125263 |
2026-04-10 00:00:40.125359 | TASK [validate-host : Collect information about zuul worker]
2026-04-10 00:00:40.731398 | orchestrator | ok
2026-04-10 00:00:40.735624 |
2026-04-10 00:00:40.735712 | TASK [validate-host : Write out all zuul information for each host]
2026-04-10 00:00:41.762555 | orchestrator -> localhost | changed
2026-04-10 00:00:41.775865 |
2026-04-10 00:00:41.775957 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-10 00:00:42.078895 | orchestrator | ok
2026-04-10 00:00:42.084008 |
2026-04-10 00:00:42.084098 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-10 00:02:08.742275 | orchestrator | changed:
2026-04-10 00:02:08.743271 | orchestrator | .d..t...... src/
2026-04-10 00:02:08.743341 | orchestrator | .d..t...... src/github.com/
2026-04-10 00:02:08.743369 | orchestrator | .d..t...... src/github.com/osism/
2026-04-10 00:02:08.743391 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-10 00:02:08.743413 | orchestrator | RedHat.yml
2026-04-10 00:02:08.763021 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-10 00:02:08.763134 | orchestrator | RedHat.yml
2026-04-10 00:02:08.763204 | orchestrator | = 1.53.0"...
2026-04-10 00:02:20.162693 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-10 00:02:20.302904 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-10 00:02:20.829519 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-10 00:02:20.931054 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-10 00:02:21.641705 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-10 00:02:21.912981 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-10 00:02:22.554126 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-10 00:02:22.554181 | orchestrator |
2026-04-10 00:02:22.554189 | orchestrator | Providers are signed by their developers.
2026-04-10 00:02:22.554196 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-10 00:02:22.554201 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-10 00:02:22.554210 | orchestrator |
2026-04-10 00:02:22.554215 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-10 00:02:22.554221 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-10 00:02:22.554234 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-10 00:02:22.554239 | orchestrator | you run "tofu init" in the future.
2026-04-10 00:02:22.554244 | orchestrator |
2026-04-10 00:02:22.554249 | orchestrator | OpenTofu has been successfully initialized!
2026-04-10 00:02:22.554254 | orchestrator |
2026-04-10 00:02:22.554259 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-10 00:02:22.554264 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-10 00:02:22.554269 | orchestrator | should now work.
2026-04-10 00:02:22.554274 | orchestrator |
2026-04-10 00:02:22.554279 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-10 00:02:22.554284 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-10 00:02:22.554290 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-10 00:02:22.765823 | orchestrator | Created and switched to workspace "ci"!
2026-04-10 00:02:22.765861 | orchestrator |
2026-04-10 00:02:22.765867 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-10 00:02:22.765872 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-10 00:02:22.765877 | orchestrator | for this configuration.
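For context on the `tofu init` output above: selections like hashicorp/local v2.8.0 under a ">= 2.2.0" constraint come from a `required_providers` block in the testbed's Terraform/OpenTofu configuration. That configuration is not shown in this log, so the following is only an illustrative sketch; the provider sources are taken from the install messages, while the openstack version constraint is an assumption based on the truncated `= 1.53.0"...` line.

```hcl
# Hypothetical sketch of a required_providers block consistent with the
# init output above (constraints partly inferred; not the actual config).
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumption: constraint line is truncated in the log
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # from the "Finding hashicorp/local versions" line
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

The lock file `.terraform.lock.hcl` mentioned in the output then pins the concrete versions (null v3.2.4, openstack v3.4.0, local v2.8.0) so later `tofu init` runs reproduce the same selections.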
2026-04-10 00:02:22.858077 | orchestrator | ci.auto.tfvars 2026-04-10 00:02:22.860562 | orchestrator | default_custom.tf 2026-04-10 00:02:24.053064 | orchestrator | data.openstack_networking_network_v2.public: Reading... 2026-04-10 00:02:24.578468 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2026-04-10 00:02:24.857563 | orchestrator | 2026-04-10 00:02:24.857656 | orchestrator | OpenTofu used the selected providers to generate the following execution 2026-04-10 00:02:24.857668 | orchestrator | plan. Resource actions are indicated with the following symbols: 2026-04-10 00:02:24.857674 | orchestrator | + create 2026-04-10 00:02:24.857680 | orchestrator | <= read (data resources) 2026-04-10 00:02:24.857686 | orchestrator | 2026-04-10 00:02:24.857692 | orchestrator | OpenTofu will perform the following actions: 2026-04-10 00:02:24.857706 | orchestrator | 2026-04-10 00:02:24.857712 | orchestrator | # data.openstack_images_image_v2.image will be read during apply 2026-04-10 00:02:24.857717 | orchestrator | # (config refers to values not yet known) 2026-04-10 00:02:24.857723 | orchestrator | <= data "openstack_images_image_v2" "image" { 2026-04-10 00:02:24.857728 | orchestrator | + checksum = (known after apply) 2026-04-10 00:02:24.857734 | orchestrator | + created_at = (known after apply) 2026-04-10 00:02:24.857739 | orchestrator | + file = (known after apply) 2026-04-10 00:02:24.857744 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.857770 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.857776 | orchestrator | + min_disk_gb = (known after apply) 2026-04-10 00:02:24.857781 | orchestrator | + min_ram_mb = (known after apply) 2026-04-10 00:02:24.857787 | orchestrator | + most_recent = true 2026-04-10 00:02:24.857792 | orchestrator | + name = (known after apply) 2026-04-10 00:02:24.857797 | orchestrator | + protected = (known after apply) 2026-04-10 
00:02:24.857803 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.857811 | orchestrator | + schema = (known after apply) 2026-04-10 00:02:24.857816 | orchestrator | + size_bytes = (known after apply) 2026-04-10 00:02:24.857821 | orchestrator | + tags = (known after apply) 2026-04-10 00:02:24.857826 | orchestrator | + updated_at = (known after apply) 2026-04-10 00:02:24.857832 | orchestrator | } 2026-04-10 00:02:24.857840 | orchestrator | 2026-04-10 00:02:24.857845 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply 2026-04-10 00:02:24.857851 | orchestrator | # (config refers to values not yet known) 2026-04-10 00:02:24.857856 | orchestrator | <= data "openstack_images_image_v2" "image_node" { 2026-04-10 00:02:24.857861 | orchestrator | + checksum = (known after apply) 2026-04-10 00:02:24.857866 | orchestrator | + created_at = (known after apply) 2026-04-10 00:02:24.857871 | orchestrator | + file = (known after apply) 2026-04-10 00:02:24.857876 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.857881 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.857887 | orchestrator | + min_disk_gb = (known after apply) 2026-04-10 00:02:24.857892 | orchestrator | + min_ram_mb = (known after apply) 2026-04-10 00:02:24.857897 | orchestrator | + most_recent = true 2026-04-10 00:02:24.857902 | orchestrator | + name = (known after apply) 2026-04-10 00:02:24.857907 | orchestrator | + protected = (known after apply) 2026-04-10 00:02:24.857912 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.857917 | orchestrator | + schema = (known after apply) 2026-04-10 00:02:24.857922 | orchestrator | + size_bytes = (known after apply) 2026-04-10 00:02:24.857927 | orchestrator | + tags = (known after apply) 2026-04-10 00:02:24.857932 | orchestrator | + updated_at = (known after apply) 2026-04-10 00:02:24.857937 | orchestrator | } 2026-04-10 00:02:24.857942 | orchestrator | 2026-04-10 
00:02:24.857947 | orchestrator | # local_file.MANAGER_ADDRESS will be created 2026-04-10 00:02:24.857952 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" { 2026-04-10 00:02:24.857958 | orchestrator | + content = (known after apply) 2026-04-10 00:02:24.857963 | orchestrator | + content_base64sha256 = (known after apply) 2026-04-10 00:02:24.857968 | orchestrator | + content_base64sha512 = (known after apply) 2026-04-10 00:02:24.857973 | orchestrator | + content_md5 = (known after apply) 2026-04-10 00:02:24.857978 | orchestrator | + content_sha1 = (known after apply) 2026-04-10 00:02:24.857983 | orchestrator | + content_sha256 = (known after apply) 2026-04-10 00:02:24.857989 | orchestrator | + content_sha512 = (known after apply) 2026-04-10 00:02:24.857994 | orchestrator | + directory_permission = "0777" 2026-04-10 00:02:24.857999 | orchestrator | + file_permission = "0644" 2026-04-10 00:02:24.858004 | orchestrator | + filename = ".MANAGER_ADDRESS.ci" 2026-04-10 00:02:24.858009 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858033 | orchestrator | } 2026-04-10 00:02:24.858040 | orchestrator | 2026-04-10 00:02:24.858046 | orchestrator | # local_file.id_rsa_pub will be created 2026-04-10 00:02:24.858051 | orchestrator | + resource "local_file" "id_rsa_pub" { 2026-04-10 00:02:24.858056 | orchestrator | + content = (known after apply) 2026-04-10 00:02:24.858061 | orchestrator | + content_base64sha256 = (known after apply) 2026-04-10 00:02:24.858066 | orchestrator | + content_base64sha512 = (known after apply) 2026-04-10 00:02:24.858071 | orchestrator | + content_md5 = (known after apply) 2026-04-10 00:02:24.858076 | orchestrator | + content_sha1 = (known after apply) 2026-04-10 00:02:24.858081 | orchestrator | + content_sha256 = (known after apply) 2026-04-10 00:02:24.858127 | orchestrator | + content_sha512 = (known after apply) 2026-04-10 00:02:24.858133 | orchestrator | + directory_permission = "0777" 2026-04-10 00:02:24.858138 | orchestrator 
| + file_permission = "0644" 2026-04-10 00:02:24.858149 | orchestrator | + filename = ".id_rsa.ci.pub" 2026-04-10 00:02:24.858154 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858159 | orchestrator | } 2026-04-10 00:02:24.858164 | orchestrator | 2026-04-10 00:02:24.858176 | orchestrator | # local_file.inventory will be created 2026-04-10 00:02:24.858181 | orchestrator | + resource "local_file" "inventory" { 2026-04-10 00:02:24.858186 | orchestrator | + content = (known after apply) 2026-04-10 00:02:24.858191 | orchestrator | + content_base64sha256 = (known after apply) 2026-04-10 00:02:24.858196 | orchestrator | + content_base64sha512 = (known after apply) 2026-04-10 00:02:24.858202 | orchestrator | + content_md5 = (known after apply) 2026-04-10 00:02:24.858207 | orchestrator | + content_sha1 = (known after apply) 2026-04-10 00:02:24.858212 | orchestrator | + content_sha256 = (known after apply) 2026-04-10 00:02:24.858217 | orchestrator | + content_sha512 = (known after apply) 2026-04-10 00:02:24.858223 | orchestrator | + directory_permission = "0777" 2026-04-10 00:02:24.858228 | orchestrator | + file_permission = "0644" 2026-04-10 00:02:24.858233 | orchestrator | + filename = "inventory.ci" 2026-04-10 00:02:24.858238 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858243 | orchestrator | } 2026-04-10 00:02:24.858251 | orchestrator | 2026-04-10 00:02:24.858256 | orchestrator | # local_sensitive_file.id_rsa will be created 2026-04-10 00:02:24.858262 | orchestrator | + resource "local_sensitive_file" "id_rsa" { 2026-04-10 00:02:24.858267 | orchestrator | + content = (sensitive value) 2026-04-10 00:02:24.858272 | orchestrator | + content_base64sha256 = (known after apply) 2026-04-10 00:02:24.858277 | orchestrator | + content_base64sha512 = (known after apply) 2026-04-10 00:02:24.858282 | orchestrator | + content_md5 = (known after apply) 2026-04-10 00:02:24.858287 | orchestrator | + content_sha1 = (known after apply) 2026-04-10 
00:02:24.858292 | orchestrator | + content_sha256 = (known after apply) 2026-04-10 00:02:24.858301 | orchestrator | + content_sha512 = (known after apply) 2026-04-10 00:02:24.858308 | orchestrator | + directory_permission = "0700" 2026-04-10 00:02:24.858317 | orchestrator | + file_permission = "0600" 2026-04-10 00:02:24.858326 | orchestrator | + filename = ".id_rsa.ci" 2026-04-10 00:02:24.858334 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858342 | orchestrator | } 2026-04-10 00:02:24.858350 | orchestrator | 2026-04-10 00:02:24.858358 | orchestrator | # null_resource.node_semaphore will be created 2026-04-10 00:02:24.858366 | orchestrator | + resource "null_resource" "node_semaphore" { 2026-04-10 00:02:24.858374 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858383 | orchestrator | } 2026-04-10 00:02:24.858391 | orchestrator | 2026-04-10 00:02:24.858399 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2026-04-10 00:02:24.858408 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2026-04-10 00:02:24.858416 | orchestrator | + attachment = (known after apply) 2026-04-10 00:02:24.858425 | orchestrator | + availability_zone = "nova" 2026-04-10 00:02:24.858433 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858438 | orchestrator | + image_id = (known after apply) 2026-04-10 00:02:24.858443 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.858448 | orchestrator | + name = "testbed-volume-manager-base" 2026-04-10 00:02:24.858453 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.858458 | orchestrator | + size = 80 2026-04-10 00:02:24.858463 | orchestrator | + volume_retype_policy = "never" 2026-04-10 00:02:24.858468 | orchestrator | + volume_type = "ssd" 2026-04-10 00:02:24.858473 | orchestrator | } 2026-04-10 00:02:24.858478 | orchestrator | 2026-04-10 00:02:24.858483 | orchestrator | # 
openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2026-04-10 00:02:24.858488 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-04-10 00:02:24.858493 | orchestrator | + attachment = (known after apply) 2026-04-10 00:02:24.858498 | orchestrator | + availability_zone = "nova" 2026-04-10 00:02:24.858503 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858513 | orchestrator | + image_id = (known after apply) 2026-04-10 00:02:24.858518 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.858523 | orchestrator | + name = "testbed-volume-0-node-base" 2026-04-10 00:02:24.858528 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.858533 | orchestrator | + size = 80 2026-04-10 00:02:24.858539 | orchestrator | + volume_retype_policy = "never" 2026-04-10 00:02:24.858544 | orchestrator | + volume_type = "ssd" 2026-04-10 00:02:24.858553 | orchestrator | } 2026-04-10 00:02:24.858564 | orchestrator | 2026-04-10 00:02:24.858573 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2026-04-10 00:02:24.858582 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-04-10 00:02:24.858591 | orchestrator | + attachment = (known after apply) 2026-04-10 00:02:24.858600 | orchestrator | + availability_zone = "nova" 2026-04-10 00:02:24.858609 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858615 | orchestrator | + image_id = (known after apply) 2026-04-10 00:02:24.858620 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.858625 | orchestrator | + name = "testbed-volume-1-node-base" 2026-04-10 00:02:24.858630 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.858635 | orchestrator | + size = 80 2026-04-10 00:02:24.858640 | orchestrator | + volume_retype_policy = "never" 2026-04-10 00:02:24.858646 | orchestrator | + volume_type = "ssd" 2026-04-10 00:02:24.858651 | 
orchestrator | } 2026-04-10 00:02:24.858656 | orchestrator | 2026-04-10 00:02:24.858661 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2026-04-10 00:02:24.858666 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-04-10 00:02:24.858671 | orchestrator | + attachment = (known after apply) 2026-04-10 00:02:24.858676 | orchestrator | + availability_zone = "nova" 2026-04-10 00:02:24.858681 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858686 | orchestrator | + image_id = (known after apply) 2026-04-10 00:02:24.858691 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.858696 | orchestrator | + name = "testbed-volume-2-node-base" 2026-04-10 00:02:24.858701 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.858706 | orchestrator | + size = 80 2026-04-10 00:02:24.858711 | orchestrator | + volume_retype_policy = "never" 2026-04-10 00:02:24.858716 | orchestrator | + volume_type = "ssd" 2026-04-10 00:02:24.858721 | orchestrator | } 2026-04-10 00:02:24.858726 | orchestrator | 2026-04-10 00:02:24.858731 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2026-04-10 00:02:24.858736 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-04-10 00:02:24.858741 | orchestrator | + attachment = (known after apply) 2026-04-10 00:02:24.858746 | orchestrator | + availability_zone = "nova" 2026-04-10 00:02:24.858751 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858756 | orchestrator | + image_id = (known after apply) 2026-04-10 00:02:24.858761 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.858770 | orchestrator | + name = "testbed-volume-3-node-base" 2026-04-10 00:02:24.858775 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.858780 | orchestrator | + size = 80 2026-04-10 00:02:24.858785 | orchestrator | + volume_retype_policy = 
"never" 2026-04-10 00:02:24.858790 | orchestrator | + volume_type = "ssd" 2026-04-10 00:02:24.858795 | orchestrator | } 2026-04-10 00:02:24.858803 | orchestrator | 2026-04-10 00:02:24.858808 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2026-04-10 00:02:24.858814 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-04-10 00:02:24.858819 | orchestrator | + attachment = (known after apply) 2026-04-10 00:02:24.858824 | orchestrator | + availability_zone = "nova" 2026-04-10 00:02:24.858829 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858838 | orchestrator | + image_id = (known after apply) 2026-04-10 00:02:24.858843 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.858848 | orchestrator | + name = "testbed-volume-4-node-base" 2026-04-10 00:02:24.858853 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.858858 | orchestrator | + size = 80 2026-04-10 00:02:24.858863 | orchestrator | + volume_retype_policy = "never" 2026-04-10 00:02:24.858869 | orchestrator | + volume_type = "ssd" 2026-04-10 00:02:24.858874 | orchestrator | } 2026-04-10 00:02:24.858879 | orchestrator | 2026-04-10 00:02:24.858884 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2026-04-10 00:02:24.858889 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-04-10 00:02:24.858894 | orchestrator | + attachment = (known after apply) 2026-04-10 00:02:24.858899 | orchestrator | + availability_zone = "nova" 2026-04-10 00:02:24.858904 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.858909 | orchestrator | + image_id = (known after apply) 2026-04-10 00:02:24.858914 | orchestrator | + metadata = (known after apply) 2026-04-10 00:02:24.858919 | orchestrator | + name = "testbed-volume-5-node-base" 2026-04-10 00:02:24.858924 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.858929 
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }
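The nine identically sized data volumes in this plan follow a counted-resource pattern. A minimal sketch of HCL that could produce them — the variable name and the naming formula are assumptions for illustration, not taken from the actual testbed configuration:

```hcl
# Hypothetical sketch: nine 20 GB SSD volumes, distributed round-robin
# over the three storage nodes, so volume i is named for node (i % 3) + 3
# (testbed-volume-0-node-3, testbed-volume-1-node-4, ... as in the plan).
variable "number_of_volumes" {
  default = 9
}

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = var.number_of_volumes
  name              = "testbed-volume-${count.index}-node-${(count.index % 3) + 3}"
  size              = 20
  availability_zone = "nova"
  volume_type       = "ssd"
}
```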
  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }
  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
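The nine attachment resources above all show `(known after apply)` because they only wire already-planned IDs together. A minimal sketch of HCL that could produce this attachment pattern — the round-robin index formula mirrors the volume names in the plan (`testbed-volume-i-node-((i % 3) + 3)`) and is an assumption, not taken from the actual testbed configuration:

```hcl
# Hypothetical sketch: attach each data volume to the node its name
# refers to. With node_server[3..5] as the storage nodes, volume i
# goes to node_server[(i % 3) + 3].
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[(count.index % 3) + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```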
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
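  The six `node_port_management[*]` entries above follow one pattern: a fixed IP in the `192.168.16.10`–`.15` range plus three shared `allowed_address_pairs` entries (`.254`, `.8`, `.9`), which allows those virtual addresses (e.g. a gateway VIP) to be carried on any node port despite port security. A minimal sketch of configuration consistent with that plan — the `count` value and referenced resource wiring are assumptions for illustration:

  ```hcl
  # Hypothetical sketch reproducing the node_port_management[*] pattern above.
  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6
    network_id = openstack_networking_network_v2.net_management.id  # assumed wiring

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed wiring
      ip_address = "192.168.16.${10 + count.index}"  # .10 through .15, as in the plan
    }

    # VIP addresses permitted as sources on every node port
    allowed_address_pairs {
      ip_address = "192.168.16.254/32"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.8/32"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.9/32"
    }
  }
  ```

  Without the `allowed_address_pairs` entries, Neutron's port security would drop traffic sourced from the VIP addresses when they fail over between nodes.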
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
00:02:24.869862 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-04-10 00:02:24.869867 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-04-10 00:02:24.869872 | orchestrator | + description = "vrrp" 2026-04-10 00:02:24.869877 | orchestrator | + direction = "ingress" 2026-04-10 00:02:24.869881 | orchestrator | + ethertype = "IPv4" 2026-04-10 00:02:24.869886 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.869898 | orchestrator | + protocol = "112" 2026-04-10 00:02:24.869903 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.869908 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-10 00:02:24.869913 | orchestrator | + remote_group_id = (known after apply) 2026-04-10 00:02:24.869917 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-10 00:02:24.869922 | orchestrator | + security_group_id = (known after apply) 2026-04-10 00:02:24.869927 | orchestrator | + tenant_id = (known after apply) 2026-04-10 00:02:24.869932 | orchestrator | } 2026-04-10 00:02:24.869937 | orchestrator | 2026-04-10 00:02:24.869941 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-04-10 00:02:24.869946 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-04-10 00:02:24.869951 | orchestrator | + all_tags = (known after apply) 2026-04-10 00:02:24.869956 | orchestrator | + description = "management security group" 2026-04-10 00:02:24.869961 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.869966 | orchestrator | + name = "testbed-management" 2026-04-10 00:02:24.869971 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.869975 | orchestrator | + stateful = (known after apply) 2026-04-10 00:02:24.869980 | orchestrator | + tenant_id = (known after apply) 2026-04-10 00:02:24.869985 | orchestrator | } 2026-04-10 
00:02:24.869990 | orchestrator | 2026-04-10 00:02:24.869995 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-04-10 00:02:24.870000 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-04-10 00:02:24.870005 | orchestrator | + all_tags = (known after apply) 2026-04-10 00:02:24.870009 | orchestrator | + description = "node security group" 2026-04-10 00:02:24.870031 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.870036 | orchestrator | + name = "testbed-node" 2026-04-10 00:02:24.870041 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.870045 | orchestrator | + stateful = (known after apply) 2026-04-10 00:02:24.870050 | orchestrator | + tenant_id = (known after apply) 2026-04-10 00:02:24.870055 | orchestrator | } 2026-04-10 00:02:24.870060 | orchestrator | 2026-04-10 00:02:24.870065 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-04-10 00:02:24.870070 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-04-10 00:02:24.870075 | orchestrator | + all_tags = (known after apply) 2026-04-10 00:02:24.870079 | orchestrator | + cidr = "192.168.16.0/20" 2026-04-10 00:02:24.870114 | orchestrator | + dns_nameservers = [ 2026-04-10 00:02:24.870120 | orchestrator | + "8.8.8.8", 2026-04-10 00:02:24.870125 | orchestrator | + "9.9.9.9", 2026-04-10 00:02:24.870130 | orchestrator | ] 2026-04-10 00:02:24.870135 | orchestrator | + enable_dhcp = true 2026-04-10 00:02:24.870140 | orchestrator | + gateway_ip = (known after apply) 2026-04-10 00:02:24.870145 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.870150 | orchestrator | + ip_version = 4 2026-04-10 00:02:24.870155 | orchestrator | + ipv6_address_mode = (known after apply) 2026-04-10 00:02:24.870160 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-04-10 00:02:24.870165 | orchestrator | + name = "subnet-testbed-management" 
2026-04-10 00:02:24.870170 | orchestrator | + network_id = (known after apply) 2026-04-10 00:02:24.870175 | orchestrator | + no_gateway = false 2026-04-10 00:02:24.870179 | orchestrator | + region = (known after apply) 2026-04-10 00:02:24.870184 | orchestrator | + service_types = (known after apply) 2026-04-10 00:02:24.870193 | orchestrator | + tenant_id = (known after apply) 2026-04-10 00:02:24.870198 | orchestrator | 2026-04-10 00:02:24.870203 | orchestrator | + allocation_pool { 2026-04-10 00:02:24.870208 | orchestrator | + end = "192.168.31.250" 2026-04-10 00:02:24.870213 | orchestrator | + start = "192.168.31.200" 2026-04-10 00:02:24.870218 | orchestrator | } 2026-04-10 00:02:24.870223 | orchestrator | } 2026-04-10 00:02:24.870228 | orchestrator | 2026-04-10 00:02:24.870233 | orchestrator | # terraform_data.image will be created 2026-04-10 00:02:24.870238 | orchestrator | + resource "terraform_data" "image" { 2026-04-10 00:02:24.870243 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.870247 | orchestrator | + input = "Ubuntu 24.04" 2026-04-10 00:02:24.870252 | orchestrator | + output = (known after apply) 2026-04-10 00:02:24.870257 | orchestrator | } 2026-04-10 00:02:24.870262 | orchestrator | 2026-04-10 00:02:24.870267 | orchestrator | # terraform_data.image_node will be created 2026-04-10 00:02:24.870272 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-10 00:02:24.870276 | orchestrator | + id = (known after apply) 2026-04-10 00:02:24.870281 | orchestrator | + input = "Ubuntu 24.04" 2026-04-10 00:02:24.870286 | orchestrator | + output = (known after apply) 2026-04-10 00:02:24.870291 | orchestrator | } 2026-04-10 00:02:24.870296 | orchestrator | 2026-04-10 00:02:24.870301 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-04-10 00:02:24.870306 | orchestrator | 2026-04-10 00:02:24.870311 | orchestrator | Changes to Outputs: 2026-04-10 00:02:24.870316 | orchestrator | + manager_address = (sensitive value) 2026-04-10 00:02:24.870433 | orchestrator | + private_key = (sensitive value) 2026-04-10 00:02:25.258310 | orchestrator | terraform_data.image_node: Creating... 2026-04-10 00:02:25.258609 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=f1353a1e-2e0f-8874-dab4-4cb70393cccd] 2026-04-10 00:02:25.258624 | orchestrator | terraform_data.image: Creating... 2026-04-10 00:02:25.262175 | orchestrator | terraform_data.image: Creation complete after 0s [id=bba209dd-af12-9371-50eb-34f32a2ff945] 2026-04-10 00:02:25.320965 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-10 00:02:25.347248 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-10 00:02:25.357409 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-10 00:02:25.357721 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-10 00:02:25.358563 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-10 00:02:25.358667 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-10 00:02:25.359601 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-10 00:02:25.360710 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-10 00:02:25.363756 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-04-10 00:02:25.373388 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-04-10 00:02:25.849910 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-04-10 00:02:25.852705 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
2026-04-10 00:02:25.870924 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-10 00:02:25.880407 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-10 00:02:25.883989 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-10 00:02:25.892907 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-10 00:02:27.114206 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=c3db7ccb-d70f-4916-bd8f-87229fc8eaa2] 2026-04-10 00:02:27.129969 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-10 00:02:27.133568 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=e99b2a4a2ddbb832b19492bf0fd5c200b34228f4] 2026-04-10 00:02:27.156371 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-10 00:02:27.169473 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=5e6bded12ebea4402fe96b761970a046b74363a2] 2026-04-10 00:02:27.183178 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-10 00:02:28.995541 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=433cfae2-239d-480b-959d-b8cd36270ab8] 2026-04-10 00:02:28.996688 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=42dd6803-c84e-4757-aa8c-571b5d9cbc16] 2026-04-10 00:02:29.000356 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-04-10 00:02:29.003219 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2026-04-10 00:02:29.006692 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=7df1152f-d9d4-4643-860e-92853d20f14a] 2026-04-10 00:02:29.013474 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-04-10 00:02:29.031361 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=c799235e-1f4d-413e-847e-76a649e6822e] 2026-04-10 00:02:29.039107 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-04-10 00:02:29.048136 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=9b5f2139-44b1-4420-a83a-35d7b8e164cf] 2026-04-10 00:02:29.056617 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-04-10 00:02:29.059916 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=02e5e60d-aa8c-49f3-b265-76760abc52dd] 2026-04-10 00:02:29.070158 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-04-10 00:02:29.127720 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=abddbba1-0dc8-4b4d-8c33-018af0530e23] 2026-04-10 00:02:29.132469 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=83ad9ae7-b217-4c7b-97e6-a7d535a7d755] 2026-04-10 00:02:29.138589 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-04-10 00:02:29.158836 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=a4e1216f-fa74-4126-b451-31b29817bdec] 2026-04-10 00:02:30.095055 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a56d5cf5-73bc-4ca9-bd08-1ca5ad43499d] 2026-04-10 00:02:30.110109 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-04-10 00:02:30.535624 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=42cd5d99-e3bc-4fd6-8c1c-1122c3dc2e2d] 2026-04-10 00:02:32.407421 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=5f24fb99-990b-48ee-9c5e-76fec810005b] 2026-04-10 00:02:33.061879 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=f013a88d-cf1a-4ed1-a814-e61af314bdae] 2026-04-10 00:02:33.061915 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=eb5308f5-155d-4496-84bb-67ce0f294762] 2026-04-10 00:02:33.061927 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21] 2026-04-10 00:02:33.061938 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=00f505f9-c68a-4ecb-966e-715e991ccb80] 2026-04-10 00:02:33.061949 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=6703ea0b-6978-4dc9-b5ac-852738c6c355] 2026-04-10 00:02:33.612906 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=fb0b096c-6855-47c0-9151-474592677993] 2026-04-10 00:02:33.619705 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-04-10 00:02:33.621492 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-10 00:02:33.621722 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 
2026-04-10 00:02:33.892591 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=744f38af-9239-4335-9c57-1b5b71f4cf38] 2026-04-10 00:02:33.897663 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=f71c937f-613d-419a-9899-612bd5d14834] 2026-04-10 00:02:33.903443 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-10 00:02:33.904998 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-04-10 00:02:33.905984 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-10 00:02:33.921304 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-04-10 00:02:33.921643 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-10 00:02:33.926940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-04-10 00:02:33.927876 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-04-10 00:02:33.933351 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-04-10 00:02:33.933662 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-04-10 00:02:34.149476 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=f2f13570-c814-4d79-8f85-4e3c56455501] 2026-04-10 00:02:34.161924 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-10 00:02:34.544542 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=20ba7fce-541c-4b62-b07e-cf9b278ae5ef] 2026-04-10 00:02:34.558347 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 
2026-04-10 00:02:34.750883 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=3ffa4098-83c3-4e69-b05b-7d3f8f8ed763] 2026-04-10 00:02:34.761127 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-10 00:02:34.807303 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=bbacfe98-cd16-40f8-84ce-4cb4edce5b27] 2026-04-10 00:02:34.813772 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-04-10 00:02:34.942000 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=a2d9da17-ec3b-4b17-b6cf-e2aae1136b91] 2026-04-10 00:02:34.948530 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-10 00:02:34.996165 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=7bdbb9b3-6b95-4074-8588-e5ab449ea726] 2026-04-10 00:02:35.005359 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-10 00:02:35.023402 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=9b07f55f-647f-4a80-b032-92dfb014d615] 2026-04-10 00:02:35.028838 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
2026-04-10 00:02:35.170586 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d6aecf41-df5a-44fd-a838-e2d00c1591d3] 2026-04-10 00:02:35.199190 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=89a30cc2-0bb4-4b76-8ce1-d4d98940fcf9] 2026-04-10 00:02:35.241410 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=b44d1548-eb64-4b66-b755-658c7646f7f4] 2026-04-10 00:02:35.429376 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=4cd8dcb4-31a2-44b2-9449-8d04daef346a] 2026-04-10 00:02:35.663576 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=60b16459-78ca-492a-9f0c-2c3700bfaadf] 2026-04-10 00:02:35.790991 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6b98b585-35a0-4514-940a-7b68e8f31c15] 2026-04-10 00:02:36.112499 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=fa58d1b6-8646-4157-ad52-bb297975fb0f] 2026-04-10 00:02:36.149270 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=ab0718e7-1a0a-4d98-adde-222d01013302] 2026-04-10 00:02:37.038657 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=509890b8-bff9-40bc-8704-43691085e1f6] 2026-04-10 00:02:38.402947 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=3f38b6c1-1ba5-41a7-b47d-0bac82f55272] 2026-04-10 00:02:38.423932 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-10 00:02:38.443930 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 
2026-04-10 00:02:38.445706 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-04-10 00:02:38.446170 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-10 00:02:38.449250 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-10 00:02:38.455244 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-04-10 00:02:38.459561 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-04-10 00:02:41.225279 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=51211f0e-558d-4576-882b-78eccd1f4760] 2026-04-10 00:02:41.234701 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-10 00:02:41.236054 | orchestrator | local_file.inventory: Creating... 2026-04-10 00:02:41.242723 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-10 00:02:41.297422 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ef13ec2004fc751965cf18d22aa3d6bd1197873e] 2026-04-10 00:02:41.297702 | orchestrator | local_file.inventory: Creation complete after 0s [id=9bd5adf2ebfd488a712e042aa8fb3aaf6ef7c65f] 2026-04-10 00:02:42.003088 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=51211f0e-558d-4576-882b-78eccd1f4760] 2026-04-10 00:02:48.453545 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-04-10 00:02:48.453644 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-04-10 00:02:48.453660 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-04-10 00:02:48.453684 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... 
[10s elapsed] 2026-04-10 00:02:48.460039 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-04-10 00:02:48.460066 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-04-10 00:02:58.463081 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-10 00:02:58.463233 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-10 00:02:58.463251 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-10 00:02:58.463263 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-10 00:02:58.463274 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-10 00:02:58.463285 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-10 00:03:08.472516 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-04-10 00:03:08.472642 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-04-10 00:03:08.473459 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-04-10 00:03:08.473490 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-04-10 00:03:08.473500 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-04-10 00:03:08.473510 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-04-10 00:03:18.481628 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-04-10 00:03:18.481765 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[40s elapsed] 2026-04-10 00:03:18.481774 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-04-10 00:03:18.481779 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-04-10 00:03:18.481785 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-04-10 00:03:18.481790 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-04-10 00:03:19.324784 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=7e153dea-24f2-4366-83af-1f41670a0df6] 2026-04-10 00:03:19.380627 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=bfcec188-8740-457f-9205-d516863e1cc5] 2026-04-10 00:03:19.421552 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=5938d8a4-6c59-4552-91a5-a96d8e74fd0d] 2026-04-10 00:03:19.500299 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=677aed9e-9a9f-40b2-9617-b5fbc1f917c6] 2026-04-10 00:03:28.482068 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-04-10 00:03:28.482214 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-04-10 00:03:29.805635 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 52s [id=290c0315-b1ae-4643-869e-7f4b4581f018] 2026-04-10 00:03:29.983324 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 52s [id=f16f4420-4402-4022-a45d-547e8fc89c0b] 2026-04-10 00:03:29.997872 | orchestrator | null_resource.node_semaphore: Creating... 
2026-04-10 00:03:30.014364 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4090906865189198411] 2026-04-10 00:03:30.018862 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-10 00:03:30.019586 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-10 00:03:30.019768 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-10 00:03:30.021575 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-04-10 00:03:30.021953 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-10 00:03:30.039504 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-04-10 00:03:30.041125 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-04-10 00:03:30.048280 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-10 00:03:30.048373 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-10 00:03:30.063859 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-04-10 00:03:33.415654 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=5938d8a4-6c59-4552-91a5-a96d8e74fd0d/433cfae2-239d-480b-959d-b8cd36270ab8] 2026-04-10 00:03:33.423905 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=677aed9e-9a9f-40b2-9617-b5fbc1f917c6/83ad9ae7-b217-4c7b-97e6-a7d535a7d755] 2026-04-10 00:03:33.539509 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=677aed9e-9a9f-40b2-9617-b5fbc1f917c6/c799235e-1f4d-413e-847e-76a649e6822e] 2026-04-10 00:03:33.557659 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=7e153dea-24f2-4366-83af-1f41670a0df6/42dd6803-c84e-4757-aa8c-571b5d9cbc16] 2026-04-10 00:03:33.585034 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=5938d8a4-6c59-4552-91a5-a96d8e74fd0d/9b5f2139-44b1-4420-a83a-35d7b8e164cf] 2026-04-10 00:03:33.640710 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=7e153dea-24f2-4366-83af-1f41670a0df6/02e5e60d-aa8c-49f3-b265-76760abc52dd] 2026-04-10 00:03:39.662547 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=677aed9e-9a9f-40b2-9617-b5fbc1f917c6/7df1152f-d9d4-4643-860e-92853d20f14a] 2026-04-10 00:03:39.681859 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=5938d8a4-6c59-4552-91a5-a96d8e74fd0d/a4e1216f-fa74-4126-b451-31b29817bdec] 2026-04-10 00:03:39.726439 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=7e153dea-24f2-4366-83af-1f41670a0df6/abddbba1-0dc8-4b4d-8c33-018af0530e23] 2026-04-10 00:03:40.064974 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-04-10 00:03:50.066530 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-10 00:03:50.768354 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=fd77f455-8bea-4442-87ae-4b7584a29787] 2026-04-10 00:03:51.605430 | orchestrator | 2026-04-10 00:03:51.605512 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-04-10 00:03:51.605523 | orchestrator | 2026-04-10 00:03:51.605531 | orchestrator | Outputs: 2026-04-10 00:03:51.605539 | orchestrator | 2026-04-10 00:03:51.605546 | orchestrator | manager_address = 2026-04-10 00:03:51.605553 | orchestrator | private_key = 2026-04-10 00:03:51.829665 | orchestrator | ok: Runtime: 0:01:31.763545 2026-04-10 00:03:51.864311 | 2026-04-10 00:03:51.864460 | TASK [Create infrastructure (stable)] 2026-04-10 00:03:52.401740 | orchestrator | skipping: Conditional result was False 2026-04-10 00:03:52.424108 | 2026-04-10 00:03:52.424296 | TASK [Fetch manager address] 2026-04-10 00:03:52.994638 | orchestrator | ok 2026-04-10 00:03:53.002298 | 2026-04-10 00:03:53.002411 | TASK [Set manager_host address] 2026-04-10 00:03:53.081564 | orchestrator | ok 2026-04-10 00:03:53.089609 | 2026-04-10 00:03:53.089736 | LOOP [Update ansible collections] 2026-04-10 00:03:54.218531 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-10 00:03:54.218939 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-10 00:03:54.218995 | orchestrator | Starting galaxy collection install process 2026-04-10 00:03:54.219021 | orchestrator | Process install dependency map 2026-04-10 00:03:54.219046 | orchestrator | Starting collection install process 2026-04-10 00:03:54.219068 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2026-04-10 00:03:54.219093 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2026-04-10 00:03:54.219130 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-10 00:03:54.219189 | orchestrator | ok: Item: commons Runtime: 0:00:00.770706 2026-04-10 00:03:55.514102 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-10 00:03:55.514296 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-10 00:03:55.514359 | orchestrator | Starting galaxy collection install process 2026-04-10 00:03:55.514407 | orchestrator | Process install dependency map 2026-04-10 00:03:55.514452 | orchestrator | Starting collection install process 2026-04-10 00:03:55.514571 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2026-04-10 00:03:55.514621 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2026-04-10 00:03:55.514662 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-10 00:03:55.514725 | orchestrator | ok: Item: services Runtime: 0:00:00.930185 2026-04-10 00:03:55.540993 | 2026-04-10 00:03:55.541247 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-10 00:04:06.112347 | orchestrator | ok 2026-04-10 00:04:06.122108 | 2026-04-10 00:04:06.122226 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-10 00:05:06.174572 | orchestrator | ok 2026-04-10 00:05:06.185337 | 2026-04-10 00:05:06.185480 | TASK [Fetch manager ssh hostkey] 2026-04-10 00:05:07.762483 | orchestrator | Output suppressed because no_log was given 2026-04-10 00:05:07.779134 | 2026-04-10 
00:05:07.779322 | TASK [Get ssh keypair from terraform environment] 2026-04-10 00:05:08.317281 | orchestrator | ok: Runtime: 0:00:00.006590 2026-04-10 00:05:08.339261 | 2026-04-10 00:05:08.339549 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-10 00:05:08.384891 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-10 00:05:08.394561 | 2026-04-10 00:05:08.394699 | TASK [Run manager part 0] 2026-04-10 00:05:09.390988 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-10 00:05:09.448914 | orchestrator | 2026-04-10 00:05:09.448957 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-10 00:05:09.448964 | orchestrator | 2026-04-10 00:05:09.448977 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-10 00:05:11.246644 | orchestrator | ok: [testbed-manager] 2026-04-10 00:05:11.246687 | orchestrator | 2026-04-10 00:05:11.246709 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-10 00:05:11.246718 | orchestrator | 2026-04-10 00:05:11.246726 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 00:05:13.026081 | orchestrator | ok: [testbed-manager] 2026-04-10 00:05:13.026135 | orchestrator | 2026-04-10 00:05:13.026144 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-10 00:05:13.741935 | orchestrator | ok: [testbed-manager] 2026-04-10 00:05:13.742166 | orchestrator | 2026-04-10 00:05:13.742192 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-10 00:05:13.796334 | orchestrator | skipping: [testbed-manager] 2026-04-10 
00:05:13.796393 | orchestrator | 2026-04-10 00:05:13.796402 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-10 00:05:13.834561 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:05:13.834634 | orchestrator | 2026-04-10 00:05:13.834645 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-10 00:05:13.871439 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:05:13.871510 | orchestrator | 2026-04-10 00:05:13.871520 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-10 00:05:14.587602 | orchestrator | changed: [testbed-manager] 2026-04-10 00:05:14.587655 | orchestrator | 2026-04-10 00:05:14.587660 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-10 00:08:01.322292 | orchestrator | changed: [testbed-manager] 2026-04-10 00:08:01.322370 | orchestrator | 2026-04-10 00:08:01.322381 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-10 00:09:21.430159 | orchestrator | changed: [testbed-manager] 2026-04-10 00:09:21.430714 | orchestrator | 2026-04-10 00:09:21.430745 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-10 00:09:45.163915 | orchestrator | changed: [testbed-manager] 2026-04-10 00:09:45.164003 | orchestrator | 2026-04-10 00:09:45.164017 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-10 00:09:54.380494 | orchestrator | changed: [testbed-manager] 2026-04-10 00:09:54.380591 | orchestrator | 2026-04-10 00:09:54.380608 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-10 00:09:54.423777 | orchestrator | ok: [testbed-manager] 2026-04-10 00:09:54.423858 | orchestrator | 2026-04-10 00:09:54.423879 | orchestrator | TASK 
[Get current user] ******************************************************** 2026-04-10 00:09:55.222317 | orchestrator | ok: [testbed-manager] 2026-04-10 00:09:55.222449 | orchestrator | 2026-04-10 00:09:55.222456 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-10 00:09:55.949452 | orchestrator | changed: [testbed-manager] 2026-04-10 00:09:55.949544 | orchestrator | 2026-04-10 00:09:55.949563 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-10 00:10:02.340706 | orchestrator | changed: [testbed-manager] 2026-04-10 00:10:02.340750 | orchestrator | 2026-04-10 00:10:02.340759 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-10 00:10:08.232727 | orchestrator | changed: [testbed-manager] 2026-04-10 00:10:08.232826 | orchestrator | 2026-04-10 00:10:08.232855 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-10 00:10:10.793561 | orchestrator | changed: [testbed-manager] 2026-04-10 00:10:10.793592 | orchestrator | 2026-04-10 00:10:10.793599 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-10 00:10:12.481918 | orchestrator | changed: [testbed-manager] 2026-04-10 00:10:12.482065 | orchestrator | 2026-04-10 00:10:12.482089 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-10 00:10:13.596075 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-10 00:10:13.596216 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-10 00:10:13.596272 | orchestrator | 2026-04-10 00:10:13.596290 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-10 00:10:13.645185 | orchestrator | [DEPRECATION WARNING]: The connection's stdin 
object is deprecated. Call 2026-04-10 00:10:13.645299 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-10 00:10:13.645310 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-10 00:10:13.645319 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-10 00:10:17.018830 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-10 00:10:17.018868 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-10 00:10:17.018874 | orchestrator | 2026-04-10 00:10:17.018880 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-10 00:10:17.639912 | orchestrator | changed: [testbed-manager] 2026-04-10 00:10:17.639983 | orchestrator | 2026-04-10 00:10:17.639996 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-10 00:11:40.089481 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-10 00:11:40.089663 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-10 00:11:40.089676 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-10 00:11:40.089684 | orchestrator | 2026-04-10 00:11:40.089693 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-10 00:11:42.345226 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-10 00:11:42.345263 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-10 00:11:42.345267 | orchestrator | 2026-04-10 00:11:42.345274 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-10 00:11:42.345279 | orchestrator | 2026-04-10 00:11:42.345283 | orchestrator | TASK [Gathering Facts] ********************************************************* 
2026-04-10 00:11:44.318899 | orchestrator | ok: [testbed-manager] 2026-04-10 00:11:44.318935 | orchestrator | 2026-04-10 00:11:44.318940 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-10 00:11:44.375859 | orchestrator | ok: [testbed-manager] 2026-04-10 00:11:44.375910 | orchestrator | 2026-04-10 00:11:44.375922 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-10 00:11:44.456388 | orchestrator | ok: [testbed-manager] 2026-04-10 00:11:44.456433 | orchestrator | 2026-04-10 00:11:44.456442 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-10 00:11:45.294425 | orchestrator | changed: [testbed-manager] 2026-04-10 00:11:45.294460 | orchestrator | 2026-04-10 00:11:45.294468 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-10 00:11:46.085984 | orchestrator | changed: [testbed-manager] 2026-04-10 00:11:46.086465 | orchestrator | 2026-04-10 00:11:46.086485 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-10 00:11:47.467643 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-10 00:11:47.467694 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-10 00:11:47.467704 | orchestrator | 2026-04-10 00:11:47.467713 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-10 00:11:48.900624 | orchestrator | changed: [testbed-manager] 2026-04-10 00:11:48.900698 | orchestrator | 2026-04-10 00:11:48.900713 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-10 00:11:50.698898 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-10 00:11:50.698940 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-10 
00:11:50.698955 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-10 00:11:50.698961 | orchestrator | 2026-04-10 00:11:50.698968 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-10 00:11:50.766767 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:11:50.766812 | orchestrator | 2026-04-10 00:11:50.766821 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-10 00:11:50.839746 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:11:50.839781 | orchestrator | 2026-04-10 00:11:50.839787 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-10 00:11:51.418548 | orchestrator | changed: [testbed-manager] 2026-04-10 00:11:51.418607 | orchestrator | 2026-04-10 00:11:51.418620 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-10 00:11:51.502634 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:11:51.502744 | orchestrator | 2026-04-10 00:11:51.502772 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-10 00:11:52.385099 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-10 00:11:52.385191 | orchestrator | changed: [testbed-manager] 2026-04-10 00:11:52.385203 | orchestrator | 2026-04-10 00:11:52.385211 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-10 00:11:52.416827 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:11:52.416910 | orchestrator | 2026-04-10 00:11:52.416922 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-10 00:11:52.447241 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:11:52.447318 | orchestrator | 2026-04-10 00:11:52.447332 | orchestrator | TASK 
[osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-10 00:11:52.482435 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:11:52.482480 | orchestrator | 2026-04-10 00:11:52.482488 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-10 00:11:52.561291 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:11:52.561369 | orchestrator | 2026-04-10 00:11:52.561382 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-10 00:11:53.279636 | orchestrator | ok: [testbed-manager] 2026-04-10 00:11:53.279682 | orchestrator | 2026-04-10 00:11:53.279691 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-10 00:11:53.279698 | orchestrator | 2026-04-10 00:11:53.279707 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 00:11:54.682572 | orchestrator | ok: [testbed-manager] 2026-04-10 00:11:54.682659 | orchestrator | 2026-04-10 00:11:54.682675 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-10 00:11:55.654639 | orchestrator | changed: [testbed-manager] 2026-04-10 00:11:55.654721 | orchestrator | 2026-04-10 00:11:55.654742 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:11:55.654760 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-10 00:11:55.654776 | orchestrator | 2026-04-10 00:11:56.175568 | orchestrator | ok: Runtime: 0:06:47.056748 2026-04-10 00:11:56.194724 | 2026-04-10 00:11:56.194912 | TASK [Point out that logging in on the manager is now possible] 2026-04-10 00:11:56.244623 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2026-04-10 00:11:56.255736 | 2026-04-10 00:11:56.255959 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-10 00:11:56.293863 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-10 00:11:56.308589 | 2026-04-10 00:11:56.308748 | TASK [Run manager part 1 + 2] 2026-04-10 00:11:57.214783 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-10 00:11:57.271160 | orchestrator | 2026-04-10 00:11:57.271231 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-10 00:11:57.271239 | orchestrator | 2026-04-10 00:11:57.271252 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 00:12:00.100099 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:00.100341 | orchestrator | 2026-04-10 00:12:00.100406 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-10 00:12:00.142262 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:12:00.142348 | orchestrator | 2026-04-10 00:12:00.142365 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-10 00:12:00.185499 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:00.185555 | orchestrator | 2026-04-10 00:12:00.185562 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-10 00:12:00.226257 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:00.226307 | orchestrator | 2026-04-10 00:12:00.226314 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-10 00:12:00.295893 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:00.295948 | orchestrator | 2026-04-10 00:12:00.295955 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-10 00:12:00.361848 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:00.361901 | orchestrator | 2026-04-10 00:12:00.361908 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-10 00:12:00.409483 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-10 00:12:00.409534 | orchestrator | 2026-04-10 00:12:00.409540 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-10 00:12:01.067756 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:01.067861 | orchestrator | 2026-04-10 00:12:01.067890 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-10 00:12:01.117004 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:12:01.117088 | orchestrator | 2026-04-10 00:12:01.117104 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-10 00:12:02.484643 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:02.484726 | orchestrator | 2026-04-10 00:12:02.484740 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-10 00:12:03.047919 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:03.047990 | orchestrator | 2026-04-10 00:12:03.048003 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-10 00:12:04.176859 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:04.176945 | orchestrator | 2026-04-10 00:12:04.176966 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-10 00:12:20.326213 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:20.326275 | orchestrator | 
2026-04-10 00:12:20.326282 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-10 00:12:20.986183 | orchestrator | ok: [testbed-manager] 2026-04-10 00:12:20.986250 | orchestrator | 2026-04-10 00:12:20.986266 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-10 00:12:21.038320 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:12:21.038395 | orchestrator | 2026-04-10 00:12:21.038410 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-10 00:12:21.989577 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:21.989638 | orchestrator | 2026-04-10 00:12:21.989648 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-10 00:12:22.925939 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:22.926070 | orchestrator | 2026-04-10 00:12:22.926100 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-10 00:12:23.484434 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:23.484829 | orchestrator | 2026-04-10 00:12:23.484857 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-10 00:12:23.528371 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-10 00:12:23.528435 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-10 00:12:23.528441 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-10 00:12:23.528446 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-10 00:12:25.510409 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:25.510458 | orchestrator | 2026-04-10 00:12:25.510466 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-10 00:12:34.223417 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-10 00:12:34.223534 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-10 00:12:34.223552 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-10 00:12:34.223565 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-10 00:12:34.223585 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-10 00:12:34.223597 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-10 00:12:34.223608 | orchestrator | 2026-04-10 00:12:34.223620 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-10 00:12:35.229663 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:35.229697 | orchestrator | 2026-04-10 00:12:35.229703 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-10 00:12:38.256781 | orchestrator | changed: [testbed-manager] 2026-04-10 00:12:38.256875 | orchestrator | 2026-04-10 00:12:38.256892 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-10 00:12:38.302788 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:12:38.302874 | orchestrator | 2026-04-10 00:12:38.302889 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-10 00:14:12.393411 | orchestrator | changed: [testbed-manager] 2026-04-10 00:14:12.393446 | orchestrator | 2026-04-10 00:14:12.393452 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-10 00:14:13.531854 | orchestrator | ok: [testbed-manager] 2026-04-10 00:14:13.531982 | 
orchestrator | 2026-04-10 00:14:13.531993 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:14:13.532000 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-10 00:14:13.532004 | orchestrator | 2026-04-10 00:14:13.929480 | orchestrator | ok: Runtime: 0:02:16.964943 2026-04-10 00:14:13.947374 | 2026-04-10 00:14:13.947612 | TASK [Reboot manager] 2026-04-10 00:14:15.494760 | orchestrator | ok: Runtime: 0:00:00.946514 2026-04-10 00:14:15.513705 | 2026-04-10 00:14:15.513879 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-10 00:14:29.367052 | orchestrator | ok 2026-04-10 00:14:29.376344 | 2026-04-10 00:14:29.376464 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-10 00:15:29.423785 | orchestrator | ok 2026-04-10 00:15:29.433484 | 2026-04-10 00:15:29.433638 | TASK [Deploy manager + bootstrap nodes] 2026-04-10 00:15:31.792544 | orchestrator | 2026-04-10 00:15:31.792657 | orchestrator | # DEPLOY MANAGER 2026-04-10 00:15:31.792666 | orchestrator | 2026-04-10 00:15:31.792672 | orchestrator | + set -e 2026-04-10 00:15:31.792677 | orchestrator | + echo 2026-04-10 00:15:31.792683 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-10 00:15:31.792690 | orchestrator | + echo 2026-04-10 00:15:31.792711 | orchestrator | + cat /opt/manager-vars.sh 2026-04-10 00:15:31.795906 | orchestrator | export NUMBER_OF_NODES=6 2026-04-10 00:15:31.795918 | orchestrator | 2026-04-10 00:15:31.795923 | orchestrator | export CEPH_VERSION=reef 2026-04-10 00:15:31.795929 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-10 00:15:31.795934 | orchestrator | export MANAGER_VERSION=latest 2026-04-10 00:15:31.795944 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-10 00:15:31.795948 | orchestrator | 2026-04-10 00:15:31.795955 | orchestrator | export ARA=false 2026-04-10 00:15:31.795959 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-10 00:15:31.795967 | orchestrator | export TEMPEST=true 2026-04-10 00:15:31.795971 | orchestrator | export IS_ZUUL=true 2026-04-10 00:15:31.795975 | orchestrator | 2026-04-10 00:15:31.795982 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 00:15:31.795986 | orchestrator | export EXTERNAL_API=false 2026-04-10 00:15:31.795990 | orchestrator | 2026-04-10 00:15:31.795994 | orchestrator | export IMAGE_USER=ubuntu 2026-04-10 00:15:31.796001 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-10 00:15:31.796005 | orchestrator | 2026-04-10 00:15:31.796009 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-10 00:15:31.796163 | orchestrator | 2026-04-10 00:15:31.796172 | orchestrator | + echo 2026-04-10 00:15:31.796179 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-10 00:15:31.796944 | orchestrator | ++ export INTERACTIVE=false 2026-04-10 00:15:31.796951 | orchestrator | ++ INTERACTIVE=false 2026-04-10 00:15:31.796956 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-10 00:15:31.796960 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-10 00:15:31.797177 | orchestrator | + source /opt/manager-vars.sh 2026-04-10 00:15:31.797184 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-10 00:15:31.797188 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-10 00:15:31.797194 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-10 00:15:31.797198 | orchestrator | ++ CEPH_VERSION=reef 2026-04-10 00:15:31.797297 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-10 00:15:31.797303 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-10 00:15:31.797307 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-10 00:15:31.797310 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-10 00:15:31.797314 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-10 00:15:31.797323 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-10 00:15:31.797327 | orchestrator | ++ export 
ARA=false 2026-04-10 00:15:31.797331 | orchestrator | ++ ARA=false 2026-04-10 00:15:31.797337 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-10 00:15:31.797341 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-10 00:15:31.797344 | orchestrator | ++ export TEMPEST=true 2026-04-10 00:15:31.797348 | orchestrator | ++ TEMPEST=true 2026-04-10 00:15:31.797352 | orchestrator | ++ export IS_ZUUL=true 2026-04-10 00:15:31.797381 | orchestrator | ++ IS_ZUUL=true 2026-04-10 00:15:31.797386 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 00:15:31.797390 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 00:15:31.797394 | orchestrator | ++ export EXTERNAL_API=false 2026-04-10 00:15:31.797398 | orchestrator | ++ EXTERNAL_API=false 2026-04-10 00:15:31.797402 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-10 00:15:31.797406 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-10 00:15:31.797410 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-10 00:15:31.797414 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-10 00:15:31.797429 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-10 00:15:31.797433 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-10 00:15:31.797494 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-10 00:15:31.848605 | orchestrator | + docker version 2026-04-10 00:15:31.958811 | orchestrator | Client: Docker Engine - Community 2026-04-10 00:15:31.958888 | orchestrator | Version: 27.5.1 2026-04-10 00:15:31.958898 | orchestrator | API version: 1.47 2026-04-10 00:15:31.958908 | orchestrator | Go version: go1.22.11 2026-04-10 00:15:31.958915 | orchestrator | Git commit: 9f9e405 2026-04-10 00:15:31.958922 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-10 00:15:31.958930 | orchestrator | OS/Arch: linux/amd64 2026-04-10 00:15:31.958937 | orchestrator | Context: default 2026-04-10 00:15:31.958943 | orchestrator | 2026-04-10 00:15:31.958950 | 
orchestrator | Server: Docker Engine - Community 2026-04-10 00:15:31.958957 | orchestrator | Engine: 2026-04-10 00:15:31.958964 | orchestrator | Version: 27.5.1 2026-04-10 00:15:31.958971 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-10 00:15:31.959007 | orchestrator | Go version: go1.22.11 2026-04-10 00:15:31.959014 | orchestrator | Git commit: 4c9b3b0 2026-04-10 00:15:31.959021 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-10 00:15:31.959028 | orchestrator | OS/Arch: linux/amd64 2026-04-10 00:15:31.959035 | orchestrator | Experimental: false 2026-04-10 00:15:31.959042 | orchestrator | containerd: 2026-04-10 00:15:31.959064 | orchestrator | Version: v2.2.2 2026-04-10 00:15:31.959071 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-10 00:15:31.959079 | orchestrator | runc: 2026-04-10 00:15:31.959097 | orchestrator | Version: 1.3.4 2026-04-10 00:15:31.959105 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-10 00:15:31.959112 | orchestrator | docker-init: 2026-04-10 00:15:31.959119 | orchestrator | Version: 0.19.0 2026-04-10 00:15:31.959127 | orchestrator | GitCommit: de40ad0 2026-04-10 00:15:31.962260 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-10 00:15:31.972340 | orchestrator | + set -e 2026-04-10 00:15:31.972369 | orchestrator | + source /opt/manager-vars.sh 2026-04-10 00:15:31.972377 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-10 00:15:31.972384 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-10 00:15:31.972391 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-10 00:15:31.972399 | orchestrator | ++ CEPH_VERSION=reef 2026-04-10 00:15:31.972406 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-10 00:15:31.972413 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-10 00:15:31.972420 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-10 00:15:31.972426 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-10 00:15:31.972433 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-10 00:15:31.972440 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-10 00:15:31.972447 | orchestrator | ++ export ARA=false 2026-04-10 00:15:31.972453 | orchestrator | ++ ARA=false 2026-04-10 00:15:31.972464 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-10 00:15:31.972472 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-10 00:15:31.972478 | orchestrator | ++ export TEMPEST=true 2026-04-10 00:15:31.972485 | orchestrator | ++ TEMPEST=true 2026-04-10 00:15:31.972491 | orchestrator | ++ export IS_ZUUL=true 2026-04-10 00:15:31.972498 | orchestrator | ++ IS_ZUUL=true 2026-04-10 00:15:31.972505 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 00:15:31.972512 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 00:15:31.972518 | orchestrator | ++ export EXTERNAL_API=false 2026-04-10 00:15:31.972525 | orchestrator | ++ EXTERNAL_API=false 2026-04-10 00:15:31.972531 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-10 00:15:31.972538 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-10 00:15:31.972545 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-10 00:15:31.972552 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-10 00:15:31.972561 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-10 00:15:31.972568 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-10 00:15:31.972575 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-10 00:15:31.972842 | orchestrator | ++ export INTERACTIVE=false 2026-04-10 00:15:31.972852 | orchestrator | ++ INTERACTIVE=false 2026-04-10 00:15:31.972859 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-10 00:15:31.972868 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-10 00:15:31.973066 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-10 00:15:31.973076 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-10 00:15:31.973083 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-10 00:15:31.980119 | orchestrator | + set -e 2026-04-10 00:15:31.980141 | orchestrator | + VERSION=reef 2026-04-10 00:15:31.981257 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-10 00:15:31.986909 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-10 00:15:31.986938 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-10 00:15:31.992415 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-10 00:15:31.998495 | orchestrator | + set -e 2026-04-10 00:15:31.998991 | orchestrator | + VERSION=2024.2 2026-04-10 00:15:31.999670 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-10 00:15:32.003690 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-10 00:15:32.003720 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-10 00:15:32.008572 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-10 00:15:32.009613 | orchestrator | ++ semver latest 7.0.0 2026-04-10 00:15:32.071009 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-10 00:15:32.071089 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-10 00:15:32.071100 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-10 00:15:32.072037 | orchestrator | ++ semver latest 10.0.0-0 2026-04-10 00:15:32.130623 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-10 00:15:32.131547 | orchestrator | ++ semver 2024.2 2025.1 2026-04-10 00:15:32.189172 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-10 00:15:32.189246 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-10 00:15:32.281922 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-10 00:15:32.282833 | orchestrator | + source /opt/venv/bin/activate 
2026-04-10 00:15:32.283846 | orchestrator | ++ deactivate nondestructive 2026-04-10 00:15:32.283865 | orchestrator | ++ '[' -n '' ']' 2026-04-10 00:15:32.283871 | orchestrator | ++ '[' -n '' ']' 2026-04-10 00:15:32.283875 | orchestrator | ++ hash -r 2026-04-10 00:15:32.283883 | orchestrator | ++ '[' -n '' ']' 2026-04-10 00:15:32.283887 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-10 00:15:32.283891 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-10 00:15:32.284015 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-10 00:15:32.284074 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-10 00:15:32.284080 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-10 00:15:32.284084 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-10 00:15:32.284215 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-10 00:15:32.284438 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-10 00:15:32.284445 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-10 00:15:32.284449 | orchestrator | ++ export PATH 2026-04-10 00:15:32.284486 | orchestrator | ++ '[' -n '' ']' 2026-04-10 00:15:32.284525 | orchestrator | ++ '[' -z '' ']' 2026-04-10 00:15:32.284560 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-10 00:15:32.284616 | orchestrator | ++ PS1='(venv) ' 2026-04-10 00:15:32.284622 | orchestrator | ++ export PS1 2026-04-10 00:15:32.284658 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-10 00:15:32.284663 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-10 00:15:32.284696 | orchestrator | ++ hash -r 2026-04-10 00:15:32.285198 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-10 00:15:35.609685 | orchestrator | 2026-04-10 00:15:35.609774 | 
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-10 00:15:35.609789 | orchestrator | 2026-04-10 00:15:35.609800 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-10 00:15:36.175198 | orchestrator | ok: [testbed-manager] 2026-04-10 00:15:36.175278 | orchestrator | 2026-04-10 00:15:36.175292 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-10 00:15:37.132485 | orchestrator | changed: [testbed-manager] 2026-04-10 00:15:37.132575 | orchestrator | 2026-04-10 00:15:37.132594 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-10 00:15:37.132608 | orchestrator | 2026-04-10 00:15:37.132620 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 00:15:39.517381 | orchestrator | ok: [testbed-manager] 2026-04-10 00:15:39.517496 | orchestrator | 2026-04-10 00:15:39.517526 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-10 00:15:39.568306 | orchestrator | ok: [testbed-manager] 2026-04-10 00:15:39.568401 | orchestrator | 2026-04-10 00:15:39.568420 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-10 00:15:40.032235 | orchestrator | changed: [testbed-manager] 2026-04-10 00:15:40.032338 | orchestrator | 2026-04-10 00:15:40.032355 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-10 00:15:40.077414 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:15:40.077518 | orchestrator | 2026-04-10 00:15:40.077536 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-10 00:15:40.403621 | orchestrator | changed: [testbed-manager] 2026-04-10 00:15:40.403718 | orchestrator | 2026-04-10 
00:15:40.403735 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-10 00:15:40.729571 | orchestrator | ok: [testbed-manager] 2026-04-10 00:15:40.729687 | orchestrator | 2026-04-10 00:15:40.729710 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-10 00:15:40.834584 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:15:40.834674 | orchestrator | 2026-04-10 00:15:40.834688 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-10 00:15:40.834699 | orchestrator | 2026-04-10 00:15:40.834709 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 00:15:42.607307 | orchestrator | ok: [testbed-manager] 2026-04-10 00:15:42.607394 | orchestrator | 2026-04-10 00:15:42.607407 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-10 00:15:42.707337 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-10 00:15:42.707453 | orchestrator | 2026-04-10 00:15:42.707469 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-10 00:15:42.761826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-10 00:15:42.761920 | orchestrator | 2026-04-10 00:15:42.761935 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-10 00:15:43.855310 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-10 00:15:43.855426 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-10 00:15:43.855451 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-10 00:15:43.855472 | orchestrator | 2026-04-10 00:15:43.855494 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-10 00:15:45.629324 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-10 00:15:45.629424 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-10 00:15:45.629439 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-10 00:15:45.629452 | orchestrator | 2026-04-10 00:15:45.629465 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-10 00:15:46.252103 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-10 00:15:46.252201 | orchestrator | changed: [testbed-manager] 2026-04-10 00:15:46.252219 | orchestrator | 2026-04-10 00:15:46.252232 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-10 00:15:46.950210 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-10 00:15:46.950305 | orchestrator | changed: [testbed-manager] 2026-04-10 00:15:46.950322 | orchestrator | 2026-04-10 00:15:46.950335 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-10 00:15:47.011354 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:15:47.011435 | orchestrator | 2026-04-10 00:15:47.011449 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-10 00:15:47.380623 | orchestrator | ok: [testbed-manager] 2026-04-10 00:15:47.380742 | orchestrator | 2026-04-10 00:15:47.380757 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-10 00:15:47.463808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-10 00:15:47.463903 | orchestrator | 2026-04-10 00:15:47.463918 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-04-10 00:15:48.624894 | orchestrator | changed: [testbed-manager] 2026-04-10 00:15:48.624989 | orchestrator | 2026-04-10 00:15:48.625006 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-10 00:15:49.457837 | orchestrator | changed: [testbed-manager] 2026-04-10 00:15:49.457937 | orchestrator | 2026-04-10 00:15:49.457957 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-10 00:16:00.596666 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:00.596774 | orchestrator | 2026-04-10 00:16:00.596808 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-10 00:16:00.659870 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:16:00.659972 | orchestrator | 2026-04-10 00:16:00.659990 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-10 00:16:00.660003 | orchestrator | 2026-04-10 00:16:00.660015 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 00:16:02.503531 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:02.503632 | orchestrator | 2026-04-10 00:16:02.503679 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-10 00:16:02.629965 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-10 00:16:02.630135 | orchestrator | 2026-04-10 00:16:02.630153 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-10 00:16:02.698673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-10 00:16:02.698760 | orchestrator | 2026-04-10 00:16:02.698774 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-04-10 00:16:05.213651 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:05.213753 | orchestrator | 2026-04-10 00:16:05.213769 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-10 00:16:05.271239 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:05.271323 | orchestrator | 2026-04-10 00:16:05.271339 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-10 00:16:05.397418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-10 00:16:05.397518 | orchestrator | 2026-04-10 00:16:05.397537 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-10 00:16:08.254150 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-10 00:16:08.254249 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-10 00:16:08.254265 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-10 00:16:08.254277 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-10 00:16:08.254289 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-10 00:16:08.254300 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-10 00:16:08.254312 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-10 00:16:08.254323 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-10 00:16:08.254334 | orchestrator | 2026-04-10 00:16:08.254347 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-10 00:16:08.892464 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:08.892567 | orchestrator | 2026-04-10 00:16:08.892585 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-04-10 00:16:09.511626 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:09.511712 | orchestrator | 2026-04-10 00:16:09.511727 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-10 00:16:09.595201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-10 00:16:09.595301 | orchestrator | 2026-04-10 00:16:09.595326 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-10 00:16:10.810417 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-10 00:16:10.810494 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-10 00:16:10.810507 | orchestrator | 2026-04-10 00:16:10.810520 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-10 00:16:11.460223 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:11.460323 | orchestrator | 2026-04-10 00:16:11.460342 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-10 00:16:11.522006 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:16:11.522253 | orchestrator | 2026-04-10 00:16:11.522278 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-10 00:16:11.595544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-10 00:16:11.595643 | orchestrator | 2026-04-10 00:16:11.595661 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-10 00:16:12.240653 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:12.240745 | orchestrator | 2026-04-10 00:16:12.240760 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-04-10 00:16:12.294669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-10 00:16:12.294783 | orchestrator | 2026-04-10 00:16:12.294798 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-10 00:16:13.658314 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-10 00:16:13.658413 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-10 00:16:13.658427 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:13.658440 | orchestrator | 2026-04-10 00:16:13.658451 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-10 00:16:14.289725 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:14.289813 | orchestrator | 2026-04-10 00:16:14.289824 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-10 00:16:14.346842 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:16:14.346941 | orchestrator | 2026-04-10 00:16:14.346966 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-10 00:16:14.446424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-10 00:16:14.446516 | orchestrator | 2026-04-10 00:16:14.446533 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-10 00:16:14.986231 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:14.986326 | orchestrator | 2026-04-10 00:16:14.986360 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-10 00:16:15.398526 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:15.398583 | orchestrator | 2026-04-10 00:16:15.398598 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-10 00:16:16.701390 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-10 00:16:16.701499 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-10 00:16:16.701516 | orchestrator | 2026-04-10 00:16:16.701530 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-10 00:16:17.337549 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:17.337651 | orchestrator | 2026-04-10 00:16:17.337669 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-10 00:16:17.733480 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:17.733570 | orchestrator | 2026-04-10 00:16:17.733591 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-10 00:16:18.096905 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:18.097000 | orchestrator | 2026-04-10 00:16:18.097017 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-10 00:16:18.151438 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:16:18.151512 | orchestrator | 2026-04-10 00:16:18.151526 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-10 00:16:18.218127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-10 00:16:18.218240 | orchestrator | 2026-04-10 00:16:18.218265 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-10 00:16:18.262823 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:18.262902 | orchestrator | 2026-04-10 00:16:18.262916 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-10 
00:16:20.275511 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-10 00:16:20.275621 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-10 00:16:20.275637 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-10 00:16:20.275650 | orchestrator | 2026-04-10 00:16:20.275662 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-10 00:16:21.002206 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:21.002299 | orchestrator | 2026-04-10 00:16:21.002315 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-10 00:16:21.713626 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:21.713732 | orchestrator | 2026-04-10 00:16:21.713755 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-10 00:16:22.466117 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:22.466334 | orchestrator | 2026-04-10 00:16:22.466359 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-10 00:16:22.540069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-10 00:16:22.540163 | orchestrator | 2026-04-10 00:16:22.540177 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-10 00:16:22.586996 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:22.587156 | orchestrator | 2026-04-10 00:16:22.587171 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-10 00:16:23.268781 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-10 00:16:23.268849 | orchestrator | 2026-04-10 00:16:23.268856 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-04-10 00:16:23.351187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-10 00:16:23.351253 | orchestrator | 2026-04-10 00:16:23.351259 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-10 00:16:24.059455 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:24.059527 | orchestrator | 2026-04-10 00:16:24.059533 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-10 00:16:24.666847 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:24.666954 | orchestrator | 2026-04-10 00:16:24.666961 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-10 00:16:24.725999 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:16:24.726126 | orchestrator | 2026-04-10 00:16:24.726135 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-10 00:16:24.779247 | orchestrator | ok: [testbed-manager] 2026-04-10 00:16:24.779340 | orchestrator | 2026-04-10 00:16:24.779347 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-10 00:16:25.600453 | orchestrator | changed: [testbed-manager] 2026-04-10 00:16:25.600494 | orchestrator | 2026-04-10 00:16:25.600500 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-10 00:17:48.007639 | orchestrator | changed: [testbed-manager] 2026-04-10 00:17:48.007749 | orchestrator | 2026-04-10 00:17:48.007767 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-10 00:17:49.044178 | orchestrator | ok: [testbed-manager] 2026-04-10 00:17:49.044268 | orchestrator | 2026-04-10 00:17:49.044280 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-04-10 00:17:49.103063 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:17:49.103155 | orchestrator | 2026-04-10 00:17:49.103170 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-10 00:17:51.914822 | orchestrator | changed: [testbed-manager] 2026-04-10 00:17:51.914934 | orchestrator | 2026-04-10 00:17:51.914952 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-10 00:17:52.019429 | orchestrator | ok: [testbed-manager] 2026-04-10 00:17:52.019520 | orchestrator | 2026-04-10 00:17:52.019558 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-10 00:17:52.019572 | orchestrator | 2026-04-10 00:17:52.019583 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-10 00:17:52.072795 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:17:52.072905 | orchestrator | 2026-04-10 00:17:52.072926 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-10 00:18:52.121634 | orchestrator | Pausing for 60 seconds 2026-04-10 00:18:52.121757 | orchestrator | changed: [testbed-manager] 2026-04-10 00:18:52.121774 | orchestrator | 2026-04-10 00:18:52.121786 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-10 00:18:55.290142 | orchestrator | changed: [testbed-manager] 2026-04-10 00:18:55.290251 | orchestrator | 2026-04-10 00:18:55.290268 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-10 00:19:36.760164 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-10 00:19:36.760285 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-10 00:19:36.760303 | orchestrator | changed: [testbed-manager] 2026-04-10 00:19:36.760345 | orchestrator | 2026-04-10 00:19:36.760358 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-10 00:19:42.692716 | orchestrator | changed: [testbed-manager] 2026-04-10 00:19:42.692831 | orchestrator | 2026-04-10 00:19:42.692849 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-10 00:19:42.770895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-10 00:19:42.771073 | orchestrator | 2026-04-10 00:19:42.771099 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-10 00:19:42.771120 | orchestrator | 2026-04-10 00:19:42.771139 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-10 00:19:42.823529 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:19:42.823660 | orchestrator | 2026-04-10 00:19:42.823687 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-10 00:19:42.911136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-10 00:19:42.911233 | orchestrator | 2026-04-10 00:19:42.911248 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-10 00:19:43.693896 | orchestrator | changed: [testbed-manager] 2026-04-10 00:19:43.693997 | orchestrator | 2026-04-10 00:19:43.694129 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-10 00:19:47.044941 | orchestrator | ok: [testbed-manager] 2026-04-10 00:19:47.045919 | orchestrator | 2026-04-10 00:19:47.045998 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-04-10 00:19:47.120499 | orchestrator | ok: [testbed-manager] => { 2026-04-10 00:19:47.120622 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-10 00:19:47.120640 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-10 00:19:47.120652 | orchestrator | "Checking running containers against expected versions...", 2026-04-10 00:19:47.120674 | orchestrator | "", 2026-04-10 00:19:47.120696 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-10 00:19:47.120716 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-10 00:19:47.120734 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.120753 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-10 00:19:47.120771 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.120789 | orchestrator | "", 2026-04-10 00:19:47.120806 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-10 00:19:47.120824 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-10 00:19:47.120842 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.120861 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-10 00:19:47.120880 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.120898 | orchestrator | "", 2026-04-10 00:19:47.120917 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-10 00:19:47.120936 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-10 00:19:47.120955 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.120975 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-10 00:19:47.120990 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121010 | orchestrator | "", 2026-04-10 00:19:47.121063 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-10 00:19:47.121082 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-10 00:19:47.121103 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121122 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-10 00:19:47.121142 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121162 | orchestrator | "", 2026-04-10 00:19:47.121181 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-10 00:19:47.121202 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-10 00:19:47.121241 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121256 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-10 00:19:47.121268 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121280 | orchestrator | "", 2026-04-10 00:19:47.121293 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-10 00:19:47.121306 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121319 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121332 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121345 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121358 | orchestrator | "", 2026-04-10 00:19:47.121370 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-10 00:19:47.121381 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-10 00:19:47.121392 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121403 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-10 00:19:47.121414 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121424 | orchestrator | "", 2026-04-10 00:19:47.121435 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-10 00:19:47.121446 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-10 00:19:47.121457 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121468 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-10 00:19:47.121479 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121490 | orchestrator | "", 2026-04-10 00:19:47.121512 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-10 00:19:47.121523 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-10 00:19:47.121539 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121551 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-10 00:19:47.121562 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121573 | orchestrator | "", 2026-04-10 00:19:47.121584 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-10 00:19:47.121595 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-10 00:19:47.121606 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121617 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-10 00:19:47.121628 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121638 | orchestrator | "", 2026-04-10 00:19:47.121649 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-10 00:19:47.121660 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121670 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121682 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121692 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121703 | orchestrator | "", 2026-04-10 00:19:47.121714 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-10 00:19:47.121724 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121735 | 
orchestrator | " Enabled: true", 2026-04-10 00:19:47.121746 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121757 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121767 | orchestrator | "", 2026-04-10 00:19:47.121778 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-10 00:19:47.121807 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121819 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121829 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121840 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121851 | orchestrator | "", 2026-04-10 00:19:47.121862 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-10 00:19:47.121872 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121889 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.121907 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.121926 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.121954 | orchestrator | "", 2026-04-10 00:19:47.121972 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-10 00:19:47.122124 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.122148 | orchestrator | " Enabled: true", 2026-04-10 00:19:47.122160 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-10 00:19:47.122171 | orchestrator | " Status: ✅ MATCH", 2026-04-10 00:19:47.122182 | orchestrator | "", 2026-04-10 00:19:47.122193 | orchestrator | "=== Summary ===", 2026-04-10 00:19:47.122204 | orchestrator | "Errors (version mismatches): 0", 2026-04-10 00:19:47.122215 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-10 00:19:47.122226 | orchestrator | "", 2026-04-10 00:19:47.122237 | orchestrator | "✅ All running containers match expected 
versions!" 2026-04-10 00:19:47.122248 | orchestrator | ] 2026-04-10 00:19:47.122260 | orchestrator | } 2026-04-10 00:19:47.122271 | orchestrator | 2026-04-10 00:19:47.122283 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-10 00:19:47.182578 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:19:47.182666 | orchestrator | 2026-04-10 00:19:47.182674 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:19:47.182680 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-10 00:19:47.182685 | orchestrator | 2026-04-10 00:19:47.296162 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-10 00:19:47.296266 | orchestrator | + deactivate 2026-04-10 00:19:47.296282 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-10 00:19:47.296299 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-10 00:19:47.296311 | orchestrator | + export PATH 2026-04-10 00:19:47.296322 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-10 00:19:47.296334 | orchestrator | + '[' -n '' ']' 2026-04-10 00:19:47.296345 | orchestrator | + hash -r 2026-04-10 00:19:47.296357 | orchestrator | + '[' -n '' ']' 2026-04-10 00:19:47.296367 | orchestrator | + unset VIRTUAL_ENV 2026-04-10 00:19:47.296378 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-10 00:19:47.296389 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-10 00:19:47.296400 | orchestrator | + unset -f deactivate 2026-04-10 00:19:47.296411 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-10 00:19:47.303425 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-10 00:19:47.303456 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-10 00:19:47.303468 | orchestrator | + local max_attempts=60 2026-04-10 00:19:47.303479 | orchestrator | + local name=ceph-ansible 2026-04-10 00:19:47.303491 | orchestrator | + local attempt_num=1 2026-04-10 00:19:47.303910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-10 00:19:47.341274 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-10 00:19:47.341348 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-10 00:19:47.341361 | orchestrator | + local max_attempts=60 2026-04-10 00:19:47.341374 | orchestrator | + local name=kolla-ansible 2026-04-10 00:19:47.341385 | orchestrator | + local attempt_num=1 2026-04-10 00:19:47.342228 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-10 00:19:47.370279 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-10 00:19:47.370334 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-10 00:19:47.370350 | orchestrator | + local max_attempts=60 2026-04-10 00:19:47.370363 | orchestrator | + local name=osism-ansible 2026-04-10 00:19:47.370375 | orchestrator | + local attempt_num=1 2026-04-10 00:19:47.371218 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-10 00:19:47.399801 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-10 00:19:47.399895 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-10 00:19:47.399910 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-10 00:19:48.109481 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-10 00:19:48.316308 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-10 00:19:48.316433 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.316449 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.316459 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-10 00:19:48.316471 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-10 00:19:48.316481 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.316490 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.316500 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2026-04-10 00:19:48.316526 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.316536 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-10 00:19:48.316546 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-04-10 00:19:48.316556 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-10 00:19:48.316565 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.316575 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-10 00:19:48.316585 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.316595 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-10 00:19:48.321408 | orchestrator | ++ semver latest 7.0.0 2026-04-10 00:19:48.360728 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-10 00:19:48.360801 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-10 00:19:48.360816 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-10 00:19:48.364078 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-10 00:20:00.962744 | orchestrator | 2026-04-10 00:20:00 | INFO  | Prepare task for execution of resolvconf. 2026-04-10 00:20:01.130401 | orchestrator | 2026-04-10 00:20:01 | INFO  | Task 99277cb7-1fff-407f-8a99-3f28669d0742 (resolvconf) was prepared for execution. 2026-04-10 00:20:01.130514 | orchestrator | 2026-04-10 00:20:01 | INFO  | It takes a moment until task 99277cb7-1fff-407f-8a99-3f28669d0742 (resolvconf) has been started and output is visible here. 
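The `set -x` trace earlier in this log shows the deploy script polling each Ansible runner container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) for a healthy Docker health-check status before continuing. A minimal sketch of that helper, reconstructed from the trace; the retry/sleep branch is never exercised in this run (every container is already healthy on the first probe), so its exact form and the sleep interval are assumptions:

```shell
# Sketch of the wait_for_container_healthy helper seen in the trace above.
# Locals and the docker inspect probe match the trace; the retry branch
# (max_attempts check, sleep interval) is an illustrative assumption.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health-check state until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 1  # illustrative interval; the real script's delay is not visible in the trace
    done
}
```

In the run above each call (e.g. `wait_for_container_healthy 60 ceph-ansible`) succeeds on the first `docker inspect`, which is why the trace shows only the locals and a single `[[ healthy == healthy ]]` comparison per container.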
2026-04-10 00:20:13.414760 | orchestrator | 2026-04-10 00:20:13.414877 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-10 00:20:13.414893 | orchestrator | 2026-04-10 00:20:13.414903 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 00:20:13.414915 | orchestrator | Friday 10 April 2026 00:20:04 +0000 (0:00:00.174) 0:00:00.174 ********** 2026-04-10 00:20:13.414925 | orchestrator | ok: [testbed-manager] 2026-04-10 00:20:13.414937 | orchestrator | 2026-04-10 00:20:13.414947 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-10 00:20:13.414958 | orchestrator | Friday 10 April 2026 00:20:07 +0000 (0:00:03.475) 0:00:03.650 ********** 2026-04-10 00:20:13.414968 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:20:13.414978 | orchestrator | 2026-04-10 00:20:13.414988 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-10 00:20:13.414998 | orchestrator | Friday 10 April 2026 00:20:07 +0000 (0:00:00.059) 0:00:03.709 ********** 2026-04-10 00:20:13.415008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-10 00:20:13.415116 | orchestrator | 2026-04-10 00:20:13.415128 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-10 00:20:13.415138 | orchestrator | Friday 10 April 2026 00:20:07 +0000 (0:00:00.084) 0:00:03.794 ********** 2026-04-10 00:20:13.415159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-10 00:20:13.415170 | orchestrator | 2026-04-10 00:20:13.415180 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-10 00:20:13.415190 | orchestrator | Friday 10 April 2026 00:20:07 +0000 (0:00:00.065) 0:00:03.859 ********** 2026-04-10 00:20:13.415200 | orchestrator | ok: [testbed-manager] 2026-04-10 00:20:13.415210 | orchestrator | 2026-04-10 00:20:13.415220 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-10 00:20:13.415230 | orchestrator | Friday 10 April 2026 00:20:08 +0000 (0:00:01.036) 0:00:04.896 ********** 2026-04-10 00:20:13.415240 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:20:13.415250 | orchestrator | 2026-04-10 00:20:13.415260 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-10 00:20:13.415270 | orchestrator | Friday 10 April 2026 00:20:08 +0000 (0:00:00.052) 0:00:04.948 ********** 2026-04-10 00:20:13.415279 | orchestrator | ok: [testbed-manager] 2026-04-10 00:20:13.415289 | orchestrator | 2026-04-10 00:20:13.415301 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-10 00:20:13.415312 | orchestrator | Friday 10 April 2026 00:20:09 +0000 (0:00:00.487) 0:00:05.435 ********** 2026-04-10 00:20:13.415323 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:20:13.415335 | orchestrator | 2026-04-10 00:20:13.415346 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-10 00:20:13.415358 | orchestrator | Friday 10 April 2026 00:20:09 +0000 (0:00:00.070) 0:00:05.506 ********** 2026-04-10 00:20:13.415369 | orchestrator | changed: [testbed-manager] 2026-04-10 00:20:13.415381 | orchestrator | 2026-04-10 00:20:13.415392 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-10 00:20:13.415404 | orchestrator | Friday 10 April 2026 00:20:09 +0000 (0:00:00.594) 0:00:06.100 ********** 2026-04-10 00:20:13.415414 | orchestrator | changed: 
[testbed-manager] 2026-04-10 00:20:13.415426 | orchestrator | 2026-04-10 00:20:13.415437 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-10 00:20:13.415448 | orchestrator | Friday 10 April 2026 00:20:11 +0000 (0:00:01.117) 0:00:07.218 ********** 2026-04-10 00:20:13.415459 | orchestrator | ok: [testbed-manager] 2026-04-10 00:20:13.415471 | orchestrator | 2026-04-10 00:20:13.415504 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-10 00:20:13.415515 | orchestrator | Friday 10 April 2026 00:20:12 +0000 (0:00:00.997) 0:00:08.216 ********** 2026-04-10 00:20:13.415527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-10 00:20:13.415538 | orchestrator | 2026-04-10 00:20:13.415549 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-10 00:20:13.415560 | orchestrator | Friday 10 April 2026 00:20:12 +0000 (0:00:00.078) 0:00:08.294 ********** 2026-04-10 00:20:13.415571 | orchestrator | changed: [testbed-manager] 2026-04-10 00:20:13.415582 | orchestrator | 2026-04-10 00:20:13.415593 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:20:13.415606 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-10 00:20:13.415618 | orchestrator | 2026-04-10 00:20:13.415629 | orchestrator | 2026-04-10 00:20:13.415638 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:20:13.415648 | orchestrator | Friday 10 April 2026 00:20:13 +0000 (0:00:01.136) 0:00:09.431 ********** 2026-04-10 00:20:13.415658 | orchestrator | =============================================================================== 2026-04-10 00:20:13.415668 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.48s 2026-04-10 00:20:13.415678 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-04-10 00:20:13.415687 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.12s 2026-04-10 00:20:13.415697 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2026-04-10 00:20:13.415707 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-04-10 00:20:13.415717 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s 2026-04-10 00:20:13.415744 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-04-10 00:20:13.415754 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-10 00:20:13.415764 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-10 00:20:13.415774 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-04-10 00:20:13.415784 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-04-10 00:20:13.415793 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-10 00:20:13.415803 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-04-10 00:20:13.544276 | orchestrator | + osism apply sshconfig 2026-04-10 00:20:24.823932 | orchestrator | 2026-04-10 00:20:24 | INFO  | Prepare task for execution of sshconfig. 2026-04-10 00:20:24.909655 | orchestrator | 2026-04-10 00:20:24 | INFO  | Task 3dc33531-0ee7-4fdd-859d-c24597298633 (sshconfig) was prepared for execution. 
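The `osism apply sshconfig` run that follows writes one config fragment per inventory host into `.ssh/config.d` and then assembles them into a single ssh config (the "Ensure .ssh/config.d exist", "Ensure config for each host exist", and "Assemble ssh config" tasks). A hedged sketch of that fragment-and-assemble pattern; the fragment contents, the operator user, and the options shown are illustrative assumptions, not the role's actual template:

```shell
# Hedged sketch of the pattern the sshconfig role follows: one fragment per
# host, assembled into a single ssh config. Fragment contents are assumptions.
workdir="$(mktemp -d)"   # stand-in for the operator's ~/.ssh
mkdir -p "$workdir/config.d"

for host in testbed-manager testbed-node-0 testbed-node-1; do
    # Write one Host stanza per inventory host (contents are illustrative).
    cat > "$workdir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking accept-new
EOF
done

# Assemble the fragments into the final config, like Ansible's assemble module.
cat "$workdir"/config.d/* > "$workdir/config"
grep -c '^Host ' "$workdir/config"   # one Host stanza per fragment
```

Keeping per-host fragments separate makes the per-item `changed: [testbed-manager] => (item=testbed-node-N)` output seen above possible, and lets the final assemble step stay a simple concatenation.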
2026-04-10 00:20:24.909736 | orchestrator | 2026-04-10 00:20:24 | INFO  | It takes a moment until task 3dc33531-0ee7-4fdd-859d-c24597298633 (sshconfig) has been started and output is visible here. 2026-04-10 00:20:34.956195 | orchestrator | 2026-04-10 00:20:34.956307 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-10 00:20:34.956325 | orchestrator | 2026-04-10 00:20:34.956337 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-10 00:20:34.956349 | orchestrator | Friday 10 April 2026 00:20:27 +0000 (0:00:00.143) 0:00:00.143 ********** 2026-04-10 00:20:34.956361 | orchestrator | ok: [testbed-manager] 2026-04-10 00:20:34.956373 | orchestrator | 2026-04-10 00:20:34.956384 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-10 00:20:34.956395 | orchestrator | Friday 10 April 2026 00:20:28 +0000 (0:00:00.911) 0:00:01.054 ********** 2026-04-10 00:20:34.956433 | orchestrator | changed: [testbed-manager] 2026-04-10 00:20:34.956446 | orchestrator | 2026-04-10 00:20:34.956457 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-10 00:20:34.956468 | orchestrator | Friday 10 April 2026 00:20:29 +0000 (0:00:00.493) 0:00:01.548 ********** 2026-04-10 00:20:34.956479 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-10 00:20:34.956490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-10 00:20:34.956501 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-10 00:20:34.956513 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-10 00:20:34.956523 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-10 00:20:34.956534 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-10 00:20:34.956545 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-10 00:20:34.956556 | orchestrator | 2026-04-10 00:20:34.956567 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-10 00:20:34.956578 | orchestrator | Friday 10 April 2026 00:20:34 +0000 (0:00:05.167) 0:00:06.715 ********** 2026-04-10 00:20:34.956589 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:20:34.956600 | orchestrator | 2026-04-10 00:20:34.956611 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-10 00:20:34.956622 | orchestrator | Friday 10 April 2026 00:20:34 +0000 (0:00:00.099) 0:00:06.815 ********** 2026-04-10 00:20:34.956633 | orchestrator | changed: [testbed-manager] 2026-04-10 00:20:34.956644 | orchestrator | 2026-04-10 00:20:34.956655 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:20:34.956668 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:20:34.956679 | orchestrator | 2026-04-10 00:20:34.956691 | orchestrator | 2026-04-10 00:20:34.956702 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:20:34.956713 | orchestrator | Friday 10 April 2026 00:20:34 +0000 (0:00:00.546) 0:00:07.361 ********** 2026-04-10 00:20:34.956724 | orchestrator | =============================================================================== 2026-04-10 00:20:34.956737 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.17s 2026-04-10 00:20:34.956749 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.91s 2026-04-10 00:20:34.956761 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2026-04-10 00:20:34.956774 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.49s 2026-04-10 00:20:34.956787 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-04-10 00:20:35.063438 | orchestrator | + osism apply known-hosts 2026-04-10 00:20:46.340581 | orchestrator | 2026-04-10 00:20:46 | INFO  | Prepare task for execution of known-hosts. 2026-04-10 00:20:46.412580 | orchestrator | 2026-04-10 00:20:46 | INFO  | Task 8ea9fe37-67d5-4e2d-b816-4fe435d94d6c (known-hosts) was prepared for execution. 2026-04-10 00:20:46.412695 | orchestrator | 2026-04-10 00:20:46 | INFO  | It takes a moment until task 8ea9fe37-67d5-4e2d-b816-4fe435d94d6c (known-hosts) has been started and output is visible here. 2026-04-10 00:21:01.995661 | orchestrator | 2026-04-10 00:21:01.995784 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-10 00:21:01.995801 | orchestrator | 2026-04-10 00:21:01.995814 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-10 00:21:01.995827 | orchestrator | Friday 10 April 2026 00:20:49 +0000 (0:00:00.191) 0:00:00.191 ********** 2026-04-10 00:21:01.995839 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-10 00:21:01.995850 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-10 00:21:01.995862 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-10 00:21:01.995896 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-10 00:21:01.995907 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-10 00:21:01.995918 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-10 00:21:01.995929 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-10 00:21:01.995940 | orchestrator | 2026-04-10 00:21:01.995951 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-10 
00:21:01.995963 | orchestrator | Friday 10 April 2026 00:20:56 +0000 (0:00:06.658) 0:00:06.850 ********** 2026-04-10 00:21:01.995986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-10 00:21:01.996002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-10 00:21:01.996047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-10 00:21:01.996066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-10 00:21:01.996078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-10 00:21:01.996089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-10 00:21:01.996100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-10 00:21:01.996111 | orchestrator | 2026-04-10 00:21:01.996123 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-10 00:21:01.996136 | orchestrator | Friday 10 April 2026 00:20:56 +0000 (0:00:00.172) 0:00:07.023 ********** 2026-04-10 00:21:01.996149 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGiOxIr5PQ9NBGYvLfPPXZIqaUyG8pgL3RuiYlc4MfwT02HpXnVSBXD1UhBlyKB0VH5Xlbl//50YytrE/+wtrKI=) 2026-04-10 00:21:01.996168 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClzu3FG2smfCE+SdFIaFznjbh3u8BLLouWEikgGJq1BSkLwnrwEjsDt62jMubKiWswMt0PKhV62myBs4PmfnOFtdSmeHWKXUDchyX4IoH8cAfDB+EUnIVAQUVEDR73LMpwI1t9MyK62t5G2mj5dGFNoMvbgJpjPPecQY21JMCgFIlwEuz+/HMo88AIt5AguK7Dx9dfBnGILRdm1u5/vDW3jZA8o6j78tZXvHzSmKKZLr58tAWh2Rf6a2t4rYXKkoUEZIZKGkLxZEclmhCR6CHrtwuI5qZZjps73rexz4i1NX43CFM9+r9Pn6uEmmOyOlSpNCdKp9XrQBbOJXobCrj/mfQxJUC3kxZlLi22cMTm/8Qr4AvZPTfbA3ZibkXIs0BNUg/kZ7Vb8Y+HCl+9gDOVrgp5rcmWIChmFl0giU+sgw/KkqRP0xH1BHFyBPrxSUbidkycBQqn171Nc5s2RsjQXrey4AyISFewrs2Gn52/e/vYTHFLmj1AeFB36SmVjd0=) 2026-04-10 00:21:01.996185 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIdXSFJHeFTdfLt5c5k0fgSU8OX/Isqx6JrztSpsHSKL) 2026-04-10 00:21:01.996200 | orchestrator | 2026-04-10 00:21:01.996213 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-10 00:21:01.996226 | orchestrator | Friday 10 April 2026 00:20:57 +0000 (0:00:01.283) 0:00:08.306 ********** 2026-04-10 00:21:01.996241 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/M3+aRVs/i+FcgNaLnxXTtr+W7vgSB17qG1HsQ/Nz7d9TTNHVzNc0PCNpR7oPxw1CWrobDC9VrojhIutxcVNw=) 2026-04-10 00:21:01.996283 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCTEwWFvL6C53oGGAGX2fTQ9feneNqwhNWkaN/0R0Tisr1P0o3NnYLQzw2SYdJMiOsGcIqO1xeYO/EIbJ+09cX1Jv0pYt6OPsKSzrfExaS2qAh8N46iUpEaFODO72Dpv5+Fkf45WqHEN7FUTWNDXTS4EvFd06BnkIIn+XpZAjV0caLEkzcQHizqSLTa2/soJ2R+Am5fHgyNZ2vbNOHJNaH2nBIpTbB6WdcjlS2QxhLsmEJTNxgvAAQ0uMTQx7+K+htStYbccWi6aMzhn1kHdSuYBOEtwNCKWqM3dyxglREeWHDKwM3d/laTQP6s2lsqnwFi91sLwxEZFv2+PavCknI0UGDSpMBsQEtzbGx0abd4w5HDQ1w+b+AwyEy+PauxCW4k0Heua3Zlkh//G3ZFnkWy+0R1soSM9XVcJaMynoMtVECt8iO9RPFgHtIxNdlT9E/j2uYB4WLW/6FIy6ONcuV+rjUs29wNSZ+SPQlU0V71JmH3KFNKPiLFdHmU8xQnz7E=) 2026-04-10 00:21:01.996308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFhwZDY30SPiy9rcRZXYGx1blS44A3kwhp9420KCm8Gb) 2026-04-10 00:21:01.996321 | orchestrator | 2026-04-10 00:21:01.996334 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-10 00:21:01.996347 | orchestrator | Friday 10 April 2026 00:20:58 +0000 (0:00:01.052) 0:00:09.358 ********** 2026-04-10 00:21:01.996360 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwvX5Ui4b01DvUUKp8z8E/skbGGxtVBjI6xmUYlwkirNfkYFcfkjDkKx2WWKrj4/OvNiFaFZwPYWCXcC+7cvsY=) 2026-04-10 00:21:01.996374 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWhe5yr7tvv0jDZLPfwiIco0ubt7zEJloUggE/batZmFA4XgOlOKUqEahGpTTqG0TxoYGX+0Iu+cv4RO6dYLf6oDU7Y4wjxdcp0AS1uFUHadJ5rTVdFV60HWsLWpwral6YPV7lvlAhNq1LMJijqNuqrFujMGJ3DG8UErJ++ijt1yx3/Je8oFnyfH47hrFWr7p3UUjgYadct6v3hZn3ZuU42NkhZS4PKbdxtJoBTWKqF/BZui/8HrIfrWDe+bHdffO9VPWhxXmjyZTno5wGmbGBQvnEJOhknqvhJpspu4/QApUc9fdA182FnEJcLwkHnUzphtsg9DjGRT/JSiaaijRrtzA8yzAtVMm2izC442OANZZ5YJ1d6l8za5tQlvwqaSzdhLZtKozo+g/H5LwxPNoKZ+b6oDejF6BSVa9UcsgrXKa6GJPk50bF5Ryjforczi9mn8HSxO2StzlPPmTOqWbQq0IE+ti/wEcCcF0Wo5AnR7tOKiDti8q+DummdD1HTQs=) 2026-04-10 00:21:01.996480 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGCW+DKi7vPOkcvFj/BruyeuogtfJvrzF5eL5bTQB7Ua) 2026-04-10 00:21:01.996515 | orchestrator | 2026-04-10 00:21:01.996533 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-10 00:21:01.996551 | orchestrator | Friday 10 April 2026 00:20:59 +0000 (0:00:01.002) 0:00:10.361 ********** 2026-04-10 00:21:01.996577 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP1B4KGfuXZ3hjyyXPJ2mANw2oH2DqGv5c8Jt9Njc5rF0ukxPG9GSza7mvpmxuc4KNLr9+8lZPQb+CS/FvMK+Rg=) 2026-04-10 00:21:01.996595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf5rNWy7bIVsWkg4C6mOU9P3NE1j/vHd2wkyOdA73DWksnMJOCgqbTJZdBGl7N8ggF1xTyW2sawAjEt4LRPRcZM93BANoFMmvG+1zla/Foy183IuSJITx4sc3h7CTglaUr/aKlmmMthRZN3jhEkCmMGK9dJipKjKZboKrdw8LfFCy19iWTfaDULaLh4xq3VD3q6wMhfkkomoT9VpAJ8FrnCzdECWxZqWOoPxFO4Vooru0T3Q3WQMWngUGYgCQjOMLhtuurjaJSm7rHfXpA0Ptdq0pgMNvubG9tw4Sob2ACx4tGBnsANQt3j8orrhQBlEWVaa3Clt8AyLG0gwwlIpF0EwSPlqAkJAZd1CLL/y8xkYj2XNwDuw3MeL9UfhNKwW0PXtbyHwT+6jnWVrqlNQcFIAZAlWq54K5Z88k5hNTldZScg4ikItyfPahtTQNyLhzPj46Ub2KxXvHWwQw0P24jLscH/iGQDwKaDonkWJ1OgaQkIYpSIyWoY8hWWwxBiK8=) 2026-04-10 00:21:01.996615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII98rSHQ1WpQ/dtLtRh3rpU/EE/cDXY29GOXpjHF5zyJ) 2026-04-10 00:21:01.996633 | orchestrator | 2026-04-10 00:21:01.996650 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-10 00:21:01.996668 | orchestrator | Friday 10 April 2026 00:21:00 +0000 (0:00:00.902) 0:00:11.263 ********** 2026-04-10 00:21:01.996690 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/SO1Nom6wsZFAQadDHZ1R/ZqOakGpHIgBMLbLCKwWvIiatOzs/VS/Tf5g1htqBxSVdm6Rx1zlpB1jS4lF+HY2D3LQon51ZQRn9TbPEkXL+OXx1225wsN6GDMpTc57Z76ncDDlXXUiXR/TMiZSteAF2IHeYV9XEeGeLmTUkrF/EW1j1AJZDjUaBWWACn137HKOYTTNYs280mNDOLluuTnFB0bdSkGTrTcKO2zeJmwgfp1edyK2TKItGXsueo+iYdvbNnzs8SDJ8tKBEyP3dMkPcygP5dXXuneEPCVBQdy09W0lcXtQYbhMo8QwDtFXDpZYLfK86+P9W5sgSf30uIqTKdy8NO58HufXFsVKB256CQuYclIdoAOcHmvuIHiYcU/TvfShClVm3lAwo+BgrfElD93/gUa6nVUL4GRwnOAfYG7qSw52UvrPUCKb2uThiQc463KbDJBl1yllQnibso2iOiSeuqr4+018h7Oo0ePkv0gM3dFPThe9Ul8zjBWgpIk=) 2026-04-10 00:21:01.996722 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIma9GH6ydSVTILM9Igq89vKEIqY3Ho2ZfatMDpCKEGQwacpFPRotgH1uuGHgYuxF2mpVp/3AK1DAZG6kp2J8H0=) 2026-04-10 00:21:01.996738 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFeb/S5ks0ARarwg3bRZ66mM4g5dS7fqymRHWS3gTz/k) 2026-04-10 00:21:01.996750 | orchestrator | 2026-04-10 00:21:01.996761 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-10 00:21:01.996772 | orchestrator | Friday 10 April 2026 00:21:01 +0000 (0:00:00.932) 0:00:12.196 ********** 2026-04-10 00:21:01.996796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIqCdH8lJiSc4sJpUnT5pUo3zPZbiEQqhr1Oq3Udozwsjdgl8bjSIxU72gQtzuj81wRnbex2K2f/Gk9reNy1uxzexShJqTSj2m6erxo/8/hRTFrfH8ujnG3R1qpmvg+rbG3TTf/LpGcSdREP7DuIpksnIVjxX5wQkONrSsNekjGH+hfygUhvo8/I6fo+lO9M970KIcggZCXZCFuDFQ/AV5kBAJN/48H3Jpvn0S68ZnczJs1bWusRQ+455DspSJ3aiPRLXG/rWt+hNtC4asikIY/+soQwL5/NCRHzKdJks/+xTQVPGAurea9CjFm1pJvwpYGT82W3A9SqgVABRqj54GNIb+NBqTbb1HLnwQ+rM78LFC6oijTsZsay/RTzLrx/juiVHisx+/YqjC6T02dG+lYdUsHgD7c/V8XT0CicPkddZtHg01OqH8/smhcWYpe7SR0elGkJp1ZAqAghOHmRWzZ0EHA7jLjm0ebxMdn+8YaV/fkpKxMstMwKC6/1bEdIE=) 2026-04-10 00:21:13.107069 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNeMM9Qn6ZExR/6GfIjIB5eG2HbrhmylR82nT/Zb9T/LFtG+S2W2EyEdIbGnxmkBTK25jrclT0qTaTYsUbOWaUE=)
2026-04-10 00:21:13.107189 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJezVIkuvvBYBX/sOz2gtjUlhrrP6VfISSfLsKxz11+X)
2026-04-10 00:21:13.107214 | orchestrator |
2026-04-10 00:21:13.107229 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:13.107245 | orchestrator | Friday 10 April 2026 00:21:02 +0000 (0:00:01.037) 0:00:13.233 **********
2026-04-10 00:21:13.107258 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHB49trrMdNJb3bNcOtRGUTWaATZplmezLwGUfz3RlJLyuipD7rmTAhUJ+Nd1nO4zcMEci1xVnVOBDK4YtjfRkM=)
2026-04-10 00:21:13.107271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFbUuiL4uzu9LhC6FWNR0sPfzDbRDGSJsqHvFaCoD8Ui)
2026-04-10 00:21:13.107287 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfAhv5aA09qb/btQDf8DQuNCIgBsIQ5vvHWsg2RFwZSWAnZRQY0of8rb7REgo7AdhTaLM9Rkf2gWA5ra0ilPYG734VsCo15wP2PYFxSHXR5Caemmuvp3NhIVMU4WSI9XsObrOx7GPRS+djrctMn8uKyZsg3yayLFJD/0jPS3Qhygb+9RsKaQb2+4Id6eqAo386eb+VHgKuE4AImfxfS5kUBosYAgFFdOd4B3aDUyA1F4C+kgKGLwiuulAkO0qkz+HQry6mcjlTIH6bWvnASIlajXqm3mcime/QsPYAByDvIA2z6330wk93wP0TZ+3qciMEkt1Jx1I+1QEmpd7cm3X9NntS5c0lHRLXmvvA/uuoKY5dgH2Bxh1IoalxvWN3LhLx+llStk1pLcjYq7oVl7Iq/4MwjQOZ1UtlbTFHp6v1xMOYlQHxQbCJXldkbUJxpXDLfLLrGOXXCspUD87e/71sy+gq1bh/fQPZFsbqK1pbOXGt4/EB43UDHEFVGCW3sUs=)
2026-04-10 00:21:13.107303 | orchestrator |
2026-04-10 00:21:13.107316 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-04-10 00:21:13.107330 | orchestrator | Friday 10 April 2026 00:21:03 +0000 (0:00:01.009) 0:00:14.243 **********
2026-04-10 00:21:13.107343 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-10 00:21:13.107356 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-10 00:21:13.107369 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-10 00:21:13.107382 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-10 00:21:13.107396 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-10 00:21:13.107421 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-10 00:21:13.107434 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-10 00:21:13.107468 | orchestrator |
2026-04-10 00:21:13.107481 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-04-10 00:21:13.107494 | orchestrator | Friday 10 April 2026 00:21:08 +0000 (0:00:05.163) 0:00:19.406 **********
2026-04-10 00:21:13.107507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-10 00:21:13.107523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-10 00:21:13.107535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-10 00:21:13.107548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-10 00:21:13.107560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-10 00:21:13.107573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-10 00:21:13.107588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-10 00:21:13.107602 | orchestrator |
2026-04-10 00:21:13.107617 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:13.107631 | orchestrator | Friday 10 April 2026 00:21:08 +0000 (0:00:00.173) 0:00:19.580 **********
2026-04-10 00:21:13.107678 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClzu3FG2smfCE+SdFIaFznjbh3u8BLLouWEikgGJq1BSkLwnrwEjsDt62jMubKiWswMt0PKhV62myBs4PmfnOFtdSmeHWKXUDchyX4IoH8cAfDB+EUnIVAQUVEDR73LMpwI1t9MyK62t5G2mj5dGFNoMvbgJpjPPecQY21JMCgFIlwEuz+/HMo88AIt5AguK7Dx9dfBnGILRdm1u5/vDW3jZA8o6j78tZXvHzSmKKZLr58tAWh2Rf6a2t4rYXKkoUEZIZKGkLxZEclmhCR6CHrtwuI5qZZjps73rexz4i1NX43CFM9+r9Pn6uEmmOyOlSpNCdKp9XrQBbOJXobCrj/mfQxJUC3kxZlLi22cMTm/8Qr4AvZPTfbA3ZibkXIs0BNUg/kZ7Vb8Y+HCl+9gDOVrgp5rcmWIChmFl0giU+sgw/KkqRP0xH1BHFyBPrxSUbidkycBQqn171Nc5s2RsjQXrey4AyISFewrs2Gn52/e/vYTHFLmj1AeFB36SmVjd0=)
2026-04-10 00:21:13.107694 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGiOxIr5PQ9NBGYvLfPPXZIqaUyG8pgL3RuiYlc4MfwT02HpXnVSBXD1UhBlyKB0VH5Xlbl//50YytrE/+wtrKI=)
2026-04-10 00:21:13.107707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIdXSFJHeFTdfLt5c5k0fgSU8OX/Isqx6JrztSpsHSKL)
2026-04-10 00:21:13.107720 | orchestrator |
2026-04-10 00:21:13.107732 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:13.107744 | orchestrator | Friday 10 April 2026 00:21:10 +0000 (0:00:01.036) 0:00:20.616 **********
2026-04-10 00:21:13.107756 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/M3+aRVs/i+FcgNaLnxXTtr+W7vgSB17qG1HsQ/Nz7d9TTNHVzNc0PCNpR7oPxw1CWrobDC9VrojhIutxcVNw=)
2026-04-10 00:21:13.107769 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTEwWFvL6C53oGGAGX2fTQ9feneNqwhNWkaN/0R0Tisr1P0o3NnYLQzw2SYdJMiOsGcIqO1xeYO/EIbJ+09cX1Jv0pYt6OPsKSzrfExaS2qAh8N46iUpEaFODO72Dpv5+Fkf45WqHEN7FUTWNDXTS4EvFd06BnkIIn+XpZAjV0caLEkzcQHizqSLTa2/soJ2R+Am5fHgyNZ2vbNOHJNaH2nBIpTbB6WdcjlS2QxhLsmEJTNxgvAAQ0uMTQx7+K+htStYbccWi6aMzhn1kHdSuYBOEtwNCKWqM3dyxglREeWHDKwM3d/laTQP6s2lsqnwFi91sLwxEZFv2+PavCknI0UGDSpMBsQEtzbGx0abd4w5HDQ1w+b+AwyEy+PauxCW4k0Heua3Zlkh//G3ZFnkWy+0R1soSM9XVcJaMynoMtVECt8iO9RPFgHtIxNdlT9E/j2uYB4WLW/6FIy6ONcuV+rjUs29wNSZ+SPQlU0V71JmH3KFNKPiLFdHmU8xQnz7E=)
2026-04-10 00:21:13.107791 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFhwZDY30SPiy9rcRZXYGx1blS44A3kwhp9420KCm8Gb)
2026-04-10 00:21:13.107804 | orchestrator |
2026-04-10 00:21:13.107817 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:13.107829 | orchestrator | Friday 10 April 2026 00:21:11 +0000 (0:00:01.031) 0:00:21.648 **********
2026-04-10 00:21:13.107842 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWhe5yr7tvv0jDZLPfwiIco0ubt7zEJloUggE/batZmFA4XgOlOKUqEahGpTTqG0TxoYGX+0Iu+cv4RO6dYLf6oDU7Y4wjxdcp0AS1uFUHadJ5rTVdFV60HWsLWpwral6YPV7lvlAhNq1LMJijqNuqrFujMGJ3DG8UErJ++ijt1yx3/Je8oFnyfH47hrFWr7p3UUjgYadct6v3hZn3ZuU42NkhZS4PKbdxtJoBTWKqF/BZui/8HrIfrWDe+bHdffO9VPWhxXmjyZTno5wGmbGBQvnEJOhknqvhJpspu4/QApUc9fdA182FnEJcLwkHnUzphtsg9DjGRT/JSiaaijRrtzA8yzAtVMm2izC442OANZZ5YJ1d6l8za5tQlvwqaSzdhLZtKozo+g/H5LwxPNoKZ+b6oDejF6BSVa9UcsgrXKa6GJPk50bF5Ryjforczi9mn8HSxO2StzlPPmTOqWbQq0IE+ti/wEcCcF0Wo5AnR7tOKiDti8q+DummdD1HTQs=)
2026-04-10 00:21:13.107855 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwvX5Ui4b01DvUUKp8z8E/skbGGxtVBjI6xmUYlwkirNfkYFcfkjDkKx2WWKrj4/OvNiFaFZwPYWCXcC+7cvsY=)
2026-04-10 00:21:13.107868 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGCW+DKi7vPOkcvFj/BruyeuogtfJvrzF5eL5bTQB7Ua)
2026-04-10 00:21:13.107880 | orchestrator |
2026-04-10 00:21:13.107894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:13.107906 | orchestrator | Friday 10 April 2026 00:21:12 +0000 (0:00:01.020) 0:00:22.668 **********
2026-04-10 00:21:13.107925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCf5rNWy7bIVsWkg4C6mOU9P3NE1j/vHd2wkyOdA73DWksnMJOCgqbTJZdBGl7N8ggF1xTyW2sawAjEt4LRPRcZM93BANoFMmvG+1zla/Foy183IuSJITx4sc3h7CTglaUr/aKlmmMthRZN3jhEkCmMGK9dJipKjKZboKrdw8LfFCy19iWTfaDULaLh4xq3VD3q6wMhfkkomoT9VpAJ8FrnCzdECWxZqWOoPxFO4Vooru0T3Q3WQMWngUGYgCQjOMLhtuurjaJSm7rHfXpA0Ptdq0pgMNvubG9tw4Sob2ACx4tGBnsANQt3j8orrhQBlEWVaa3Clt8AyLG0gwwlIpF0EwSPlqAkJAZd1CLL/y8xkYj2XNwDuw3MeL9UfhNKwW0PXtbyHwT+6jnWVrqlNQcFIAZAlWq54K5Z88k5hNTldZScg4ikItyfPahtTQNyLhzPj46Ub2KxXvHWwQw0P24jLscH/iGQDwKaDonkWJ1OgaQkIYpSIyWoY8hWWwxBiK8=)
2026-04-10 00:21:13.107937 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP1B4KGfuXZ3hjyyXPJ2mANw2oH2DqGv5c8Jt9Njc5rF0ukxPG9GSza7mvpmxuc4KNLr9+8lZPQb+CS/FvMK+Rg=)
2026-04-10 00:21:13.107962 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII98rSHQ1WpQ/dtLtRh3rpU/EE/cDXY29GOXpjHF5zyJ)
2026-04-10 00:21:17.340560 | orchestrator |
2026-04-10 00:21:17.341537 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:17.341574 | orchestrator | Friday 10 April 2026 00:21:13 +0000 (0:00:01.054) 0:00:23.723 **********
2026-04-10 00:21:17.341587 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIma9GH6ydSVTILM9Igq89vKEIqY3Ho2ZfatMDpCKEGQwacpFPRotgH1uuGHgYuxF2mpVp/3AK1DAZG6kp2J8H0=)
2026-04-10 00:21:17.341601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/SO1Nom6wsZFAQadDHZ1R/ZqOakGpHIgBMLbLCKwWvIiatOzs/VS/Tf5g1htqBxSVdm6Rx1zlpB1jS4lF+HY2D3LQon51ZQRn9TbPEkXL+OXx1225wsN6GDMpTc57Z76ncDDlXXUiXR/TMiZSteAF2IHeYV9XEeGeLmTUkrF/EW1j1AJZDjUaBWWACn137HKOYTTNYs280mNDOLluuTnFB0bdSkGTrTcKO2zeJmwgfp1edyK2TKItGXsueo+iYdvbNnzs8SDJ8tKBEyP3dMkPcygP5dXXuneEPCVBQdy09W0lcXtQYbhMo8QwDtFXDpZYLfK86+P9W5sgSf30uIqTKdy8NO58HufXFsVKB256CQuYclIdoAOcHmvuIHiYcU/TvfShClVm3lAwo+BgrfElD93/gUa6nVUL4GRwnOAfYG7qSw52UvrPUCKb2uThiQc463KbDJBl1yllQnibso2iOiSeuqr4+018h7Oo0ePkv0gM3dFPThe9Ul8zjBWgpIk=)
2026-04-10 00:21:17.341640 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFeb/S5ks0ARarwg3bRZ66mM4g5dS7fqymRHWS3gTz/k)
2026-04-10 00:21:17.341651 | orchestrator |
2026-04-10 00:21:17.341660 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:17.341682 | orchestrator | Friday 10 April 2026 00:21:14 +0000 (0:00:01.056) 0:00:24.779 **********
2026-04-10 00:21:17.341691 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJezVIkuvvBYBX/sOz2gtjUlhrrP6VfISSfLsKxz11+X)
2026-04-10 00:21:17.341701 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIqCdH8lJiSc4sJpUnT5pUo3zPZbiEQqhr1Oq3Udozwsjdgl8bjSIxU72gQtzuj81wRnbex2K2f/Gk9reNy1uxzexShJqTSj2m6erxo/8/hRTFrfH8ujnG3R1qpmvg+rbG3TTf/LpGcSdREP7DuIpksnIVjxX5wQkONrSsNekjGH+hfygUhvo8/I6fo+lO9M970KIcggZCXZCFuDFQ/AV5kBAJN/48H3Jpvn0S68ZnczJs1bWusRQ+455DspSJ3aiPRLXG/rWt+hNtC4asikIY/+soQwL5/NCRHzKdJks/+xTQVPGAurea9CjFm1pJvwpYGT82W3A9SqgVABRqj54GNIb+NBqTbb1HLnwQ+rM78LFC6oijTsZsay/RTzLrx/juiVHisx+/YqjC6T02dG+lYdUsHgD7c/V8XT0CicPkddZtHg01OqH8/smhcWYpe7SR0elGkJp1ZAqAghOHmRWzZ0EHA7jLjm0ebxMdn+8YaV/fkpKxMstMwKC6/1bEdIE=)
2026-04-10 00:21:17.341710 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNeMM9Qn6ZExR/6GfIjIB5eG2HbrhmylR82nT/Zb9T/LFtG+S2W2EyEdIbGnxmkBTK25jrclT0qTaTYsUbOWaUE=)
2026-04-10 00:21:17.341719 | orchestrator |
2026-04-10 00:21:17.341728 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-10 00:21:17.341737 | orchestrator | Friday 10 April 2026 00:21:15 +0000 (0:00:01.067) 0:00:25.847 **********
2026-04-10 00:21:17.341746 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfAhv5aA09qb/btQDf8DQuNCIgBsIQ5vvHWsg2RFwZSWAnZRQY0of8rb7REgo7AdhTaLM9Rkf2gWA5ra0ilPYG734VsCo15wP2PYFxSHXR5Caemmuvp3NhIVMU4WSI9XsObrOx7GPRS+djrctMn8uKyZsg3yayLFJD/0jPS3Qhygb+9RsKaQb2+4Id6eqAo386eb+VHgKuE4AImfxfS5kUBosYAgFFdOd4B3aDUyA1F4C+kgKGLwiuulAkO0qkz+HQry6mcjlTIH6bWvnASIlajXqm3mcime/QsPYAByDvIA2z6330wk93wP0TZ+3qciMEkt1Jx1I+1QEmpd7cm3X9NntS5c0lHRLXmvvA/uuoKY5dgH2Bxh1IoalxvWN3LhLx+llStk1pLcjYq7oVl7Iq/4MwjQOZ1UtlbTFHp6v1xMOYlQHxQbCJXldkbUJxpXDLfLLrGOXXCspUD87e/71sy+gq1bh/fQPZFsbqK1pbOXGt4/EB43UDHEFVGCW3sUs=)
2026-04-10 00:21:17.341755 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHB49trrMdNJb3bNcOtRGUTWaATZplmezLwGUfz3RlJLyuipD7rmTAhUJ+Nd1nO4zcMEci1xVnVOBDK4YtjfRkM=)
2026-04-10 00:21:17.341764 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFbUuiL4uzu9LhC6FWNR0sPfzDbRDGSJsqHvFaCoD8Ui)
2026-04-10 00:21:17.341772 | orchestrator |
2026-04-10 00:21:17.341781 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-04-10 00:21:17.341790 | orchestrator | Friday 10 April 2026 00:21:16 +0000 (0:00:01.043) 0:00:26.891 **********
2026-04-10 00:21:17.341799 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-10 00:21:17.341808 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-10 00:21:17.341817 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-10 00:21:17.341825 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-10 00:21:17.341834 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-10 00:21:17.341842 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-10 00:21:17.341851 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-10 00:21:17.341860 | orchestrator | 
skipping: [testbed-manager]
2026-04-10 00:21:17.341869 | orchestrator |
2026-04-10 00:21:17.341897 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-04-10 00:21:17.341907 | orchestrator | Friday 10 April 2026 00:21:16 +0000 (0:00:00.194) 0:00:27.086 **********
2026-04-10 00:21:17.341922 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:21:17.341931 | orchestrator |
2026-04-10 00:21:17.341939 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-04-10 00:21:17.341948 | orchestrator | Friday 10 April 2026 00:21:16 +0000 (0:00:00.058) 0:00:27.145 **********
2026-04-10 00:21:17.341957 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:21:17.341965 | orchestrator |
2026-04-10 00:21:17.341974 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-04-10 00:21:17.341983 | orchestrator | Friday 10 April 2026 00:21:16 +0000 (0:00:00.065) 0:00:27.211 **********
2026-04-10 00:21:17.341991 | orchestrator | changed: [testbed-manager]
2026-04-10 00:21:17.342000 | orchestrator |
2026-04-10 00:21:17.342008 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:21:17.342098 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-10 00:21:17.342112 | orchestrator |
2026-04-10 00:21:17.342121 | orchestrator |
2026-04-10 00:21:17.342130 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:21:17.342139 | orchestrator | Friday 10 April 2026 00:21:17 +0000 (0:00:00.495) 0:00:27.706 **********
2026-04-10 00:21:17.342147 | orchestrator | ===============================================================================
2026-04-10 00:21:17.342156 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.66s
2026-04-10 00:21:17.342165 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s
2026-04-10 00:21:17.342175 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.28s
2026-04-10 00:21:17.342184 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2026-04-10 00:21:17.342193 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-04-10 00:21:17.342202 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-04-10 00:21:17.342210 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-04-10 00:21:17.342219 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-10 00:21:17.342227 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-10 00:21:17.342236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-10 00:21:17.342245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-04-10 00:21:17.342260 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-04-10 00:21:17.342269 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2026-04-10 00:21:17.342278 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2026-04-10 00:21:17.342287 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-04-10 00:21:17.342295 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s
2026-04-10 00:21:17.342304 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s
2026-04-10 00:21:17.342313 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s
2026-04-10 00:21:17.342321 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2026-04-10 00:21:17.342330 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2026-04-10 00:21:17.526360 | orchestrator | + osism apply squid
2026-04-10 00:21:28.885674 | orchestrator | 2026-04-10 00:21:28 | INFO  | Prepare task for execution of squid.
2026-04-10 00:21:28.962563 | orchestrator | 2026-04-10 00:21:28 | INFO  | Task d52244fa-806e-4cbc-a72b-0a30e0c6b66a (squid) was prepared for execution.
2026-04-10 00:21:28.962669 | orchestrator | 2026-04-10 00:21:28 | INFO  | It takes a moment until task d52244fa-806e-4cbc-a72b-0a30e0c6b66a (squid) has been started and output is visible here.
2026-04-10 00:23:25.767451 | orchestrator |
2026-04-10 00:23:25.767556 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-04-10 00:23:25.767570 | orchestrator |
2026-04-10 00:23:25.767581 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-04-10 00:23:25.767591 | orchestrator | Friday 10 April 2026 00:21:32 +0000 (0:00:00.205) 0:00:00.205 **********
2026-04-10 00:23:25.767602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-04-10 00:23:25.767613 | orchestrator |
2026-04-10 00:23:25.767623 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-04-10 00:23:25.767632 | orchestrator | Friday 10 April 2026 00:21:32 +0000 (0:00:00.079) 0:00:00.285 **********
2026-04-10 00:23:25.767642 | orchestrator | ok: [testbed-manager]
2026-04-10 00:23:25.767652 | orchestrator |
2026-04-10 00:23:25.767661 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-04-10 00:23:25.767671 | orchestrator | Friday 10 April 2026 00:21:34 +0000 (0:00:02.407) 0:00:02.692 **********
2026-04-10 00:23:25.767680 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-04-10 00:23:25.767689 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-04-10 00:23:25.767699 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-04-10 00:23:25.767708 | orchestrator |
2026-04-10 00:23:25.767717 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-04-10 00:23:25.767726 | orchestrator | Friday 10 April 2026 00:21:35 +0000 (0:00:01.244) 0:00:03.937 **********
2026-04-10 00:23:25.767735 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-04-10 00:23:25.767745 | orchestrator |
2026-04-10 00:23:25.767754 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-04-10 00:23:25.767763 | orchestrator | Friday 10 April 2026 00:21:36 +0000 (0:00:01.012) 0:00:04.950 **********
2026-04-10 00:23:25.767772 | orchestrator | ok: [testbed-manager]
2026-04-10 00:23:25.767782 | orchestrator |
2026-04-10 00:23:25.767791 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-04-10 00:23:25.767800 | orchestrator | Friday 10 April 2026 00:21:37 +0000 (0:00:00.348) 0:00:05.298 **********
2026-04-10 00:23:25.767809 | orchestrator | changed: [testbed-manager]
2026-04-10 00:23:25.767818 | orchestrator |
2026-04-10 00:23:25.767827 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-04-10 00:23:25.767837 | orchestrator | Friday 10 April 2026 00:21:38 +0000 (0:00:00.921) 0:00:06.219 **********
2026-04-10 00:23:25.767846 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-04-10 00:23:25.767856 | orchestrator | ok: [testbed-manager]
2026-04-10 00:23:25.767865 | orchestrator |
2026-04-10 00:23:25.767874 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-04-10 00:23:25.767886 | orchestrator | Friday 10 April 2026 00:22:12 +0000 (0:00:34.597) 0:00:40.817 **********
2026-04-10 00:23:25.767902 | orchestrator | changed: [testbed-manager]
2026-04-10 00:23:25.767916 | orchestrator |
2026-04-10 00:23:25.767932 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-04-10 00:23:25.767948 | orchestrator | Friday 10 April 2026 00:22:24 +0000 (0:00:12.088) 0:00:52.905 **********
2026-04-10 00:23:25.767964 | orchestrator | Pausing for 60 seconds
2026-04-10 00:23:25.767980 | orchestrator | changed: [testbed-manager]
2026-04-10 00:23:25.767995 | orchestrator |
2026-04-10 00:23:25.768011 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-04-10 00:23:25.768022 | orchestrator | Friday 10 April 2026 00:23:24 +0000 (0:01:00.078) 0:01:52.984 **********
2026-04-10 00:23:25.768033 | orchestrator | ok: [testbed-manager]
2026-04-10 00:23:25.768043 | orchestrator |
2026-04-10 00:23:25.768084 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-04-10 00:23:25.768127 | orchestrator | Friday 10 April 2026 00:23:24 +0000 (0:00:00.062) 0:01:53.046 **********
2026-04-10 00:23:25.768139 | orchestrator | changed: [testbed-manager]
2026-04-10 00:23:25.768153 | orchestrator |
2026-04-10 00:23:25.768168 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:23:25.768182 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:23:25.768196 | orchestrator |
2026-04-10 00:23:25.768210 | orchestrator |
2026-04-10 00:23:25.768225 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:23:25.768241 | orchestrator | Friday 10 April 2026 00:23:25 +0000 (0:00:00.624) 0:01:53.671 **********
2026-04-10 00:23:25.768256 | orchestrator | ===============================================================================
2026-04-10 00:23:25.768272 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-04-10 00:23:25.768282 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.60s
2026-04-10 00:23:25.768291 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.09s
2026-04-10 00:23:25.768299 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.41s
2026-04-10 00:23:25.768308 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.24s
2026-04-10 00:23:25.768317 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.01s
2026-04-10 00:23:25.768325 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s
2026-04-10 00:23:25.768334 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s
2026-04-10 00:23:25.768343 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2026-04-10 00:23:25.768352 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-04-10 00:23:25.768367 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-04-10 00:23:25.944321 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-10 00:23:25.944417 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla
2026-04-10 00:23:25.949105 | orchestrator | + set -e
2026-04-10 00:23:25.949135 | orchestrator | + NAMESPACE=kolla
2026-04-10 00:23:25.949148 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-10 00:23:25.952565 | orchestrator | ++ semver latest 9.0.0
2026-04-10 00:23:25.991728 | orchestrator | + [[ -1 -lt 0 ]]
2026-04-10 00:23:25.991756 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-10 00:23:25.991939 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-10 00:23:37.222643 | orchestrator | 2026-04-10 00:23:37 | INFO  | Prepare task for execution of operator.
2026-04-10 00:23:37.299538 | orchestrator | 2026-04-10 00:23:37 | INFO  | Task f8ea1bcd-48b5-47ff-846a-b216badb4ae5 (operator) was prepared for execution.
2026-04-10 00:23:37.299639 | orchestrator | 2026-04-10 00:23:37 | INFO  | It takes a moment until task f8ea1bcd-48b5-47ff-846a-b216badb4ae5 (operator) has been started and output is visible here.
2026-04-10 00:23:57.267244 | orchestrator |
2026-04-10 00:23:57.267338 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-10 00:23:57.267348 | orchestrator |
2026-04-10 00:23:57.267356 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-10 00:23:57.267364 | orchestrator | Friday 10 April 2026 00:23:40 +0000 (0:00:00.136) 0:00:00.136 **********
2026-04-10 00:23:57.267371 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:23:57.267381 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:23:57.267388 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:23:57.267395 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:23:57.267402 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:23:57.267408 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:23:57.267418 | orchestrator |
2026-04-10 00:23:57.267426 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-10 00:23:57.267450 | orchestrator | Friday 10 April 2026 00:23:43 +0000 (0:00:03.432) 0:00:03.568 **********
2026-04-10 00:23:57.267457 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:23:57.267464 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:23:57.267471 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:23:57.267478 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:23:57.267485 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:23:57.267491 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:23:57.267498 | orchestrator |
2026-04-10 00:23:57.267505 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-10 00:23:57.267511 | orchestrator |
2026-04-10 00:23:57.267518 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-10 00:23:57.267525 | orchestrator | Friday 10 April 2026 00:23:49 +0000 (0:00:05.849) 0:00:09.417 **********
2026-04-10 00:23:57.267532 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:23:57.267539 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:23:57.267545 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:23:57.267552 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:23:57.267559 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:23:57.267565 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:23:57.267572 | orchestrator |
2026-04-10 00:23:57.267579 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-10 00:23:57.267600 | orchestrator | Friday 10 April 2026 00:23:49 +0000 (0:00:00.165) 0:00:09.583 **********
2026-04-10 00:23:57.267607 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:23:57.267628 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:23:57.267636 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:23:57.267650 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:23:57.267657 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:23:57.267663 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:23:57.267670 | orchestrator |
2026-04-10 00:23:57.267677 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-10 00:23:57.267684 | orchestrator | Friday 10 April 2026 00:23:49 +0000 (0:00:00.147) 0:00:09.730 **********
2026-04-10 00:23:57.267691 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:23:57.267698 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:23:57.267705 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:23:57.267711 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:23:57.267718 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:23:57.267725 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:23:57.267731 | orchestrator |
2026-04-10 00:23:57.267738 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-10 00:23:57.267745 | orchestrator | Friday 10 April 2026 00:23:50 +0000 (0:00:00.728) 0:00:10.459 **********
2026-04-10 00:23:57.267752 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:23:57.267759 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:23:57.267765 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:23:57.267772 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:23:57.267778 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:23:57.267785 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:23:57.267792 | orchestrator |
2026-04-10 00:23:57.267801 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-10 00:23:57.267808 | orchestrator | Friday 10 April 2026 00:23:51 +0000 (0:00:00.925) 0:00:11.385 **********
2026-04-10 00:23:57.267816 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-10 00:23:57.267824 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-10 00:23:57.267832 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-10 00:23:57.267840 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-10 00:23:57.267847 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-10 00:23:57.267855 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-10 00:23:57.267862 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-10 00:23:57.267870 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-10 00:23:57.267877 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-10 00:23:57.267892 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-10 00:23:57.267900 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-10 00:23:57.267908 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-10 00:23:57.267916 | orchestrator |
2026-04-10 00:23:57.267924 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-10 00:23:57.267931 | orchestrator | Friday 10 April 2026 00:23:52 +0000 (0:00:01.215) 0:00:12.600 **********
2026-04-10 00:23:57.267939 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:23:57.267947 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:23:57.267955 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:23:57.267962 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:23:57.267969 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:23:57.267977 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:23:57.267985 | orchestrator |
2026-04-10 00:23:57.267992 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-10 00:23:57.268001 | orchestrator | Friday 10 April 2026 00:23:53 +0000 (0:00:01.323) 0:00:13.924 **********
2026-04-10 00:23:57.268009 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-10 00:23:57.268018 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-10 00:23:57.268025 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-10 00:23:57.268033 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-10 00:23:57.268041 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-10 00:23:57.268093 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-10 00:23:57.268102 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-10 00:23:57.268110 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-10 00:23:57.268117 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-10 00:23:57.268125 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-10 00:23:57.268133 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-10 00:23:57.268141 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-10 00:23:57.268148 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-10 00:23:57.268155 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-10 00:23:57.268162 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-10 00:23:57.268168 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-10 00:23:57.268175 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-10 00:23:57.268182 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-10 00:23:57.268189 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-10 00:23:57.268195 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-10 00:23:57.268202 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-10 00:23:57.268209 | orchestrator |
2026-04-10 00:23:57.268216 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-10 00:23:57.268223 | orchestrator | Friday 10 April 2026 00:23:55 +0000 (0:00:01.229) 0:00:15.154 **********
2026-04-10 00:23:57.268230 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:23:57.268237 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:23:57.268244 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:23:57.268251 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:23:57.268257 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:23:57.268264 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:23:57.268271 | orchestrator |
2026-04-10 00:23:57.268277 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-10 00:23:57.268291 | orchestrator | Friday 10 April 2026 00:23:55 +0000 (0:00:00.138) 0:00:15.292 **********
2026-04-10 00:23:57.268297 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:23:57.268304 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:23:57.268311 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:23:57.268317 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:23:57.268324 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:23:57.268331 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:23:57.268337 | orchestrator |
2026-04-10 00:23:57.268344 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-10 00:23:57.268351 | orchestrator | Friday 10 April 2026 00:23:55 +0000 (0:00:00.156) 0:00:15.448 **********
2026-04-10 00:23:57.268358 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:23:57.268364 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:23:57.268371 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:23:57.268378 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:23:57.268384 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:23:57.268391 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:23:57.268398 | orchestrator |
2026-04-10 00:23:57.268405 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-10 00:23:57.268412 | orchestrator | Friday 10 April 2026 00:23:56 +0000 (0:00:00.674) 0:00:16.122 **********
2026-04-10 00:23:57.268418 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:23:57.268425 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:23:57.268431 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:23:57.268438 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:23:57.268445 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:23:57.268451 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:23:57.268458 | orchestrator |
2026-04-10 00:23:57.268465 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-10 00:23:57.268471 | orchestrator | Friday 10 April 2026 00:23:56 +0000 (0:00:00.162) 0:00:16.285 **********
2026-04-10 00:23:57.268478 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-10 00:23:57.268485 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:23:57.268492 | orchestrator | changed: 
[testbed-node-1] => (item=None) 2026-04-10 00:23:57.268499 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-10 00:23:57.268506 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-10 00:23:57.268512 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-10 00:23:57.268519 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:23:57.268526 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:23:57.268532 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:23:57.268539 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:23:57.268546 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-10 00:23:57.268552 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:23:57.268559 | orchestrator | 2026-04-10 00:23:57.268566 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-10 00:23:57.268573 | orchestrator | Friday 10 April 2026 00:23:57 +0000 (0:00:00.759) 0:00:17.044 ********** 2026-04-10 00:23:57.268579 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:23:57.268586 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:23:57.268593 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:23:57.268599 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:23:57.268606 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:23:57.268613 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:23:57.268619 | orchestrator | 2026-04-10 00:23:57.268626 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-10 00:23:57.268633 | orchestrator | Friday 10 April 2026 00:23:57 +0000 (0:00:00.135) 0:00:17.179 ********** 2026-04-10 00:23:57.268640 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:23:57.268646 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:23:57.268653 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:23:57.268660 | orchestrator | skipping: 
[testbed-node-3] 2026-04-10 00:23:57.268676 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:23:58.465664 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:23:58.466635 | orchestrator | 2026-04-10 00:23:58.466677 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-10 00:23:58.466696 | orchestrator | Friday 10 April 2026 00:23:57 +0000 (0:00:00.126) 0:00:17.305 ********** 2026-04-10 00:23:58.466716 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:23:58.466735 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:23:58.466753 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:23:58.466771 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:23:58.466789 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:23:58.466808 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:23:58.466825 | orchestrator | 2026-04-10 00:23:58.466843 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-10 00:23:58.466863 | orchestrator | Friday 10 April 2026 00:23:57 +0000 (0:00:00.135) 0:00:17.440 ********** 2026-04-10 00:23:58.466880 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:23:58.466900 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:23:58.466919 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:23:58.466937 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:23:58.466956 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:23:58.466974 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:23:58.466993 | orchestrator | 2026-04-10 00:23:58.467011 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-10 00:23:58.467032 | orchestrator | Friday 10 April 2026 00:23:58 +0000 (0:00:00.651) 0:00:18.092 ********** 2026-04-10 00:23:58.467050 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:23:58.467112 | orchestrator | skipping: 
[testbed-node-1] 2026-04-10 00:23:58.467132 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:23:58.467145 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:23:58.467156 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:23:58.467167 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:23:58.467178 | orchestrator | 2026-04-10 00:23:58.467189 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:23:58.467231 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 00:23:58.467245 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 00:23:58.467256 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 00:23:58.467267 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 00:23:58.467278 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 00:23:58.467289 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 00:23:58.467300 | orchestrator | 2026-04-10 00:23:58.467311 | orchestrator | 2026-04-10 00:23:58.467322 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:23:58.467334 | orchestrator | Friday 10 April 2026 00:23:58 +0000 (0:00:00.214) 0:00:18.307 ********** 2026-04-10 00:23:58.467345 | orchestrator | =============================================================================== 2026-04-10 00:23:58.467356 | orchestrator | Do not require tty for all users ---------------------------------------- 5.85s 2026-04-10 00:23:58.467367 | orchestrator | Gathering Facts --------------------------------------------------------- 3.43s 2026-04-10 00:23:58.467377 | 
orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.32s 2026-04-10 00:23:58.467410 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s 2026-04-10 00:23:58.467422 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s 2026-04-10 00:23:58.467433 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.93s 2026-04-10 00:23:58.467444 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s 2026-04-10 00:23:58.467455 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.73s 2026-04-10 00:23:58.467466 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.67s 2026-04-10 00:23:58.467477 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s 2026-04-10 00:23:58.467488 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2026-04-10 00:23:58.467499 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-04-10 00:23:58.467510 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2026-04-10 00:23:58.467521 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s 2026-04-10 00:23:58.467532 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2026-04-10 00:23:58.467543 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s 2026-04-10 00:23:58.467554 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2026-04-10 00:23:58.467565 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2026-04-10 
00:23:58.467576 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2026-04-10 00:23:58.636237 | orchestrator | + osism apply --environment custom facts 2026-04-10 00:23:59.855972 | orchestrator | 2026-04-10 00:23:59 | INFO  | Trying to run play facts in environment custom 2026-04-10 00:24:09.920976 | orchestrator | 2026-04-10 00:24:09 | INFO  | Prepare task for execution of facts. 2026-04-10 00:24:10.008838 | orchestrator | 2026-04-10 00:24:10 | INFO  | Task a9b3d289-5c9e-4a19-872a-83ea4e758535 (facts) was prepared for execution. 2026-04-10 00:24:10.008920 | orchestrator | 2026-04-10 00:24:10 | INFO  | It takes a moment until task a9b3d289-5c9e-4a19-872a-83ea4e758535 (facts) has been started and output is visible here. 2026-04-10 00:24:53.337153 | orchestrator | 2026-04-10 00:24:53.337293 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-10 00:24:53.337312 | orchestrator | 2026-04-10 00:24:53.337324 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-10 00:24:53.337336 | orchestrator | Friday 10 April 2026 00:24:12 +0000 (0:00:00.092) 0:00:00.092 ********** 2026-04-10 00:24:53.337347 | orchestrator | ok: [testbed-manager] 2026-04-10 00:24:53.337359 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:24:53.337371 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:24:53.337382 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:24:53.337392 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:24:53.337404 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:24:53.337415 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:24:53.337425 | orchestrator | 2026-04-10 00:24:53.337436 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-10 00:24:53.337447 | orchestrator | Friday 10 April 2026 00:24:14 +0000 (0:00:01.422) 0:00:01.514 
********** 2026-04-10 00:24:53.337458 | orchestrator | ok: [testbed-manager] 2026-04-10 00:24:53.337469 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:24:53.337480 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:24:53.337491 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:24:53.337502 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:24:53.337514 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:24:53.337541 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:24:53.337552 | orchestrator | 2026-04-10 00:24:53.337587 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-10 00:24:53.337599 | orchestrator | 2026-04-10 00:24:53.337610 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-10 00:24:53.337621 | orchestrator | Friday 10 April 2026 00:24:15 +0000 (0:00:01.169) 0:00:02.684 ********** 2026-04-10 00:24:53.337632 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:24:53.337643 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:24:53.337656 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:24:53.337668 | orchestrator | 2026-04-10 00:24:53.337680 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-10 00:24:53.337693 | orchestrator | Friday 10 April 2026 00:24:15 +0000 (0:00:00.074) 0:00:02.758 ********** 2026-04-10 00:24:53.337706 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:24:53.337718 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:24:53.337730 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:24:53.337742 | orchestrator | 2026-04-10 00:24:53.337754 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-10 00:24:53.337767 | orchestrator | Friday 10 April 2026 00:24:15 +0000 (0:00:00.163) 0:00:02.922 ********** 2026-04-10 00:24:53.337777 | orchestrator | ok: [testbed-node-3] 2026-04-10 
00:24:53.337788 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:24:53.337799 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:24:53.337810 | orchestrator | 2026-04-10 00:24:53.337821 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-10 00:24:53.337832 | orchestrator | Friday 10 April 2026 00:24:15 +0000 (0:00:00.163) 0:00:03.085 ********** 2026-04-10 00:24:53.337844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:24:53.337856 | orchestrator | 2026-04-10 00:24:53.337867 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-10 00:24:53.337878 | orchestrator | Friday 10 April 2026 00:24:16 +0000 (0:00:00.099) 0:00:03.185 ********** 2026-04-10 00:24:53.337889 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:24:53.337900 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:24:53.337911 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:24:53.337922 | orchestrator | 2026-04-10 00:24:53.337933 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-10 00:24:53.337944 | orchestrator | Friday 10 April 2026 00:24:16 +0000 (0:00:00.397) 0:00:03.583 ********** 2026-04-10 00:24:53.337955 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:24:53.337966 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:24:53.337977 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:24:53.337995 | orchestrator | 2026-04-10 00:24:53.338013 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-10 00:24:53.338127 | orchestrator | Friday 10 April 2026 00:24:16 +0000 (0:00:00.113) 0:00:03.696 ********** 2026-04-10 00:24:53.338141 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:24:53.338152 | orchestrator | 
changed: [testbed-node-4] 2026-04-10 00:24:53.338163 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:24:53.338174 | orchestrator | 2026-04-10 00:24:53.338185 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-10 00:24:53.338196 | orchestrator | Friday 10 April 2026 00:24:17 +0000 (0:00:00.995) 0:00:04.691 ********** 2026-04-10 00:24:53.338207 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:24:53.338218 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:24:53.338229 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:24:53.338240 | orchestrator | 2026-04-10 00:24:53.338251 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-10 00:24:53.338262 | orchestrator | Friday 10 April 2026 00:24:18 +0000 (0:00:00.435) 0:00:05.127 ********** 2026-04-10 00:24:53.338273 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:24:53.338284 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:24:53.338295 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:24:53.338306 | orchestrator | 2026-04-10 00:24:53.338327 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-10 00:24:53.338338 | orchestrator | Friday 10 April 2026 00:24:19 +0000 (0:00:01.094) 0:00:06.221 ********** 2026-04-10 00:24:53.338349 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:24:53.338360 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:24:53.338371 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:24:53.338382 | orchestrator | 2026-04-10 00:24:53.338393 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-10 00:24:53.338404 | orchestrator | Friday 10 April 2026 00:24:35 +0000 (0:00:16.353) 0:00:22.575 ********** 2026-04-10 00:24:53.338415 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:24:53.338426 | orchestrator | skipping: [testbed-node-4] 
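The repository role above removes the legacy `/etc/apt/sources.list` and installs a deb822-style `ubuntu.sources` file instead (on Ubuntu 24.04 the deb822 format is the distribution default, which is also why the "Include tasks for Ubuntu < 24.04" step is skipped on these nodes). A minimal sketch of what such a task could look like; the actual template ships with `osism.commons.repository`, and the mirror URI, suites, and keyring path here are assumptions:

```yaml
# Hypothetical sketch, not the actual osism.commons.repository task:
# replace the legacy one-line sources.list with a deb822-style definition
# and trigger the cache-refresh handler seen later in this log.
- name: Copy ubuntu.sources file
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/ubuntu.sources
    content: |
      Types: deb
      URIs: http://archive.ubuntu.com/ubuntu
      Suites: noble noble-updates noble-security
      Components: main restricted universe multiverse
      Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
    mode: "0644"
  notify: Force update of package cache
```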
2026-04-10 00:24:53.338437 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:24:53.338448 | orchestrator |
2026-04-10 00:24:53.338459 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-10 00:24:53.338490 | orchestrator | Friday 10 April 2026 00:24:35 +0000 (0:00:00.087) 0:00:22.662 **********
2026-04-10 00:24:53.338502 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:24:53.338512 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:24:53.338523 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:24:53.338534 | orchestrator |
2026-04-10 00:24:53.338545 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-10 00:24:53.338556 | orchestrator | Friday 10 April 2026 00:24:44 +0000 (0:00:08.888) 0:00:31.551 **********
2026-04-10 00:24:53.338567 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:24:53.338578 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:24:53.338589 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:24:53.338600 | orchestrator |
2026-04-10 00:24:53.338611 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-10 00:24:53.338622 | orchestrator | Friday 10 April 2026 00:24:44 +0000 (0:00:00.448) 0:00:32.000 **********
2026-04-10 00:24:53.338633 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-10 00:24:53.338644 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-10 00:24:53.338655 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-10 00:24:53.338666 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-10 00:24:53.338677 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-10 00:24:53.338688 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-10 00:24:53.338699 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-10 00:24:53.338710 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-10 00:24:53.338721 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-10 00:24:53.338732 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-10 00:24:53.338743 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-10 00:24:53.338754 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-10 00:24:53.338764 | orchestrator |
2026-04-10 00:24:53.338776 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-10 00:24:53.338786 | orchestrator | Friday 10 April 2026 00:24:48 +0000 (0:00:03.492) 0:00:35.493 **********
2026-04-10 00:24:53.338797 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:24:53.338808 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:24:53.338819 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:24:53.338830 | orchestrator |
2026-04-10 00:24:53.338841 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-10 00:24:53.338852 | orchestrator |
2026-04-10 00:24:53.338863 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-10 00:24:53.338917 | orchestrator | Friday 10 April 2026 00:24:49 +0000 (0:00:01.303) 0:00:36.796 **********
2026-04-10 00:24:53.338928 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:24:53.338947 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:24:53.338958 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:24:53.338969 | orchestrator | ok: [testbed-manager]
2026-04-10 00:24:53.338979 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:24:53.338990 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:24:53.339001 | orchestrator | ok: [testbed-node-4]
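The fact files copied above (`testbed_ceph_devices`, `testbed_ceph_osd_devices`, and so on) are Ansible local facts: they live in `/etc/ansible/facts.d` on each node and surface to later plays under `ansible_local`. A sketch of the mechanism under that assumption; the fact content here is invented for illustration, the real files are generated by the testbed play:

```yaml
# Hypothetical illustration of installing a static JSON local fact.
# The actual testbed fact content is produced by the play above.
- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy fact file
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/testbed_ceph_osd_devices.fact
    content: '{"devices": ["/dev/sdb", "/dev/sdc"]}'
    mode: "0644"

# Once facts are re-gathered, the value is reachable as
# ansible_local.testbed_ceph_osd_devices.devices
```

This also explains the "Gather facts for all hosts" play that follows immediately: facts have to be re-gathered before the new `ansible_local` entries become visible to subsequent plays.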
2026-04-10 00:24:53.339012 | orchestrator |
2026-04-10 00:24:53.339023 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:24:53.339035 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:24:53.339046 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:24:53.339058 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:24:53.339070 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:24:53.339081 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:24:53.339118 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:24:53.339130 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:24:53.339141 | orchestrator |
2026-04-10 00:24:53.339152 | orchestrator |
2026-04-10 00:24:53.339163 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:24:53.339174 | orchestrator | Friday 10 April 2026 00:24:53 +0000 (0:00:03.636) 0:00:40.433 **********
2026-04-10 00:24:53.339185 | orchestrator | ===============================================================================
2026-04-10 00:24:53.339196 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.35s
2026-04-10 00:24:53.339207 | orchestrator | Install required packages (Debian) -------------------------------------- 8.89s
2026-04-10 00:24:53.339218 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.64s
2026-04-10 00:24:53.339228 | orchestrator | Copy fact files --------------------------------------------------------- 3.49s
2026-04-10 00:24:53.339239 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-04-10 00:24:53.339250 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.30s
2026-04-10 00:24:53.339269 | orchestrator | Copy fact file ---------------------------------------------------------- 1.17s
2026-04-10 00:24:53.529476 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-04-10 00:24:53.529579 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2026-04-10 00:24:53.529593 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2026-04-10 00:24:53.529604 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-04-10 00:24:53.529615 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2026-04-10 00:24:53.529626 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s
2026-04-10 00:24:53.529637 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s
2026-04-10 00:24:53.529648 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-04-10 00:24:53.529659 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s
2026-04-10 00:24:53.529670 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-04-10 00:24:53.529702 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s
2026-04-10 00:24:53.748143 | orchestrator | + osism apply bootstrap
2026-04-10 00:25:05.114337 | orchestrator | 2026-04-10 00:25:05 | INFO  | Prepare task for execution of bootstrap.
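The bootstrap run that starts here opens with a "Group hosts based on state bootstrap" play before applying any roles. A common way to implement such a task in Ansible is `group_by`, which sorts hosts into dynamic groups derived from a per-host variable so later plays can target only the hosts that still need bootstrapping. A sketch under that assumption; the variable name is invented, and the actual play ships with OSISM:

```yaml
# Hypothetical sketch of the "Group hosts based on state bootstrap" play:
# put each host into a dynamic group derived from a per-host state
# variable (name assumed here), so later plays can limit themselves
# to hosts in a particular bootstrap state.
- name: Group hosts based on state bootstrap
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on state bootstrap
      ansible.builtin.group_by:
        key: "state_bootstrap_{{ bootstrap_state | default('undefined') }}"
```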
2026-04-10 00:25:05.201511 | orchestrator | 2026-04-10 00:25:05 | INFO  | Task 7b8b0886-d1e6-4f9b-8303-9f54501333bd (bootstrap) was prepared for execution.
2026-04-10 00:25:05.201604 | orchestrator | 2026-04-10 00:25:05 | INFO  | It takes a moment until task 7b8b0886-d1e6-4f9b-8303-9f54501333bd (bootstrap) has been started and output is visible here.
2026-04-10 00:25:20.181824 | orchestrator |
2026-04-10 00:25:20.181937 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-10 00:25:20.181954 | orchestrator |
2026-04-10 00:25:20.181966 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-10 00:25:20.181978 | orchestrator | Friday 10 April 2026 00:25:08 +0000 (0:00:00.189) 0:00:00.189 **********
2026-04-10 00:25:20.181989 | orchestrator | ok: [testbed-manager]
2026-04-10 00:25:20.182002 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:25:20.182013 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:25:20.182073 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:25:20.182085 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:25:20.182096 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:25:20.182154 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:25:20.182174 | orchestrator |
2026-04-10 00:25:20.182194 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-10 00:25:20.182211 | orchestrator |
2026-04-10 00:25:20.182229 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-10 00:25:20.182248 | orchestrator | Friday 10 April 2026 00:25:08 +0000 (0:00:00.225) 0:00:00.414 **********
2026-04-10 00:25:20.182267 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:25:20.182288 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:25:20.182307 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:25:20.182323 | orchestrator | ok: [testbed-manager]
2026-04-10 00:25:20.182334 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:25:20.182345 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:25:20.182357 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:25:20.182370 | orchestrator |
2026-04-10 00:25:20.182382 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-10 00:25:20.182394 | orchestrator |
2026-04-10 00:25:20.182407 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-10 00:25:20.182419 | orchestrator | Friday 10 April 2026 00:25:13 +0000 (0:00:04.410) 0:00:04.825 **********
2026-04-10 00:25:20.182433 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-10 00:25:20.182446 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-10 00:25:20.182458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-10 00:25:20.182470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:25:20.182483 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-10 00:25:20.182495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-10 00:25:20.182508 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-10 00:25:20.182521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-10 00:25:20.182533 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-10 00:25:20.182545 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-10 00:25:20.182557 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-10 00:25:20.182569 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-10 00:25:20.182581 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-10 00:25:20.182593 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-10 00:25:20.182606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-10 00:25:20.182619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-10 00:25:20.182657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-10 00:25:20.182670 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-10 00:25:20.182682 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-10 00:25:20.182694 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-10 00:25:20.182707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-10 00:25:20.182718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-10 00:25:20.182729 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:25:20.182740 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-10 00:25:20.182751 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:25:20.182762 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-10 00:25:20.182772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-10 00:25:20.182783 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-10 00:25:20.182794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-10 00:25:20.182805 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-10 00:25:20.182815 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-10 00:25:20.182826 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:25:20.182837 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-10 00:25:20.182848 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-10 00:25:20.182859 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-10 00:25:20.182869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-10 00:25:20.182880 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-10 00:25:20.182891 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-10 00:25:20.182902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-10 00:25:20.182913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-10 00:25:20.182923 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-10 00:25:20.182934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-10 00:25:20.182945 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-10 00:25:20.182956 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-10 00:25:20.182967 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-10 00:25:20.182978 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-10 00:25:20.183009 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:25:20.183020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-10 00:25:20.183031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-10 00:25:20.183042 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-10 00:25:20.183053 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:25:20.183064 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-10 00:25:20.183074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-10 00:25:20.183085 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:25:20.183096 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-10 00:25:20.183145 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:25:20.183164 | orchestrator |
2026-04-10 00:25:20.183182 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-10 00:25:20.183199 | orchestrator |
2026-04-10 00:25:20.183217 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-10 00:25:20.183237 | orchestrator | Friday 10 April 2026 00:25:13 +0000 (0:00:00.424) 0:00:05.250 **********
2026-04-10 00:25:20.183256 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:25:20.183293 | orchestrator | ok: [testbed-manager]
2026-04-10 00:25:20.183315 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:25:20.183338 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:25:20.183349 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:25:20.183360 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:25:20.183371 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:25:20.183382 | orchestrator |
2026-04-10 00:25:20.183393 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-10 00:25:20.183405 | orchestrator | Friday 10 April 2026 00:25:14 +0000 (0:00:01.224) 0:00:06.474 **********
2026-04-10 00:25:20.183416 | orchestrator | ok: [testbed-manager]
2026-04-10 00:25:20.183427 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:25:20.183452 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:25:20.183463 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:25:20.183474 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:25:20.183495 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:25:20.183507 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:25:20.183517 | orchestrator |
2026-04-10 00:25:20.183528 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-10 00:25:20.183540 | orchestrator | Friday 10 April 2026 00:25:15 +0000 (0:00:01.171) 0:00:07.645 **********
2026-04-10 00:25:20.183552 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:25:20.183565 | orchestrator | 2026-04-10 00:25:20.183576 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-10 00:25:20.183587 | orchestrator | Friday 10 April 2026 00:25:16 +0000 (0:00:00.266) 0:00:07.912 ********** 2026-04-10 00:25:20.183598 | orchestrator | changed: [testbed-manager] 2026-04-10 00:25:20.183610 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:25:20.183621 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:25:20.183632 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:25:20.183642 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:25:20.183653 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:20.183664 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:25:20.183675 | orchestrator | 2026-04-10 00:25:20.183686 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-10 00:25:20.183697 | orchestrator | Friday 10 April 2026 00:25:17 +0000 (0:00:01.533) 0:00:09.445 ********** 2026-04-10 00:25:20.183708 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:25:20.183720 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:25:20.183733 | orchestrator | 2026-04-10 00:25:20.183744 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-10 00:25:20.183774 | orchestrator | Friday 10 April 2026 00:25:18 +0000 (0:00:00.292) 0:00:09.737 ********** 2026-04-10 00:25:20.183786 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:20.183797 | 
orchestrator | changed: [testbed-node-0] 2026-04-10 00:25:20.183808 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:25:20.183819 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:25:20.183830 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:25:20.183841 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:25:20.183852 | orchestrator | 2026-04-10 00:25:20.183863 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-10 00:25:20.183874 | orchestrator | Friday 10 April 2026 00:25:19 +0000 (0:00:01.042) 0:00:10.779 ********** 2026-04-10 00:25:20.183885 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:25:20.183896 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:25:20.183907 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:20.183918 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:25:20.183929 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:25:20.183939 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:25:20.183959 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:25:20.183970 | orchestrator | 2026-04-10 00:25:20.183981 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-10 00:25:20.183997 | orchestrator | Friday 10 April 2026 00:25:19 +0000 (0:00:00.584) 0:00:11.364 ********** 2026-04-10 00:25:20.184008 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:25:20.184019 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:25:20.184030 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:25:20.184041 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:25:20.184052 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:25:20.184063 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:25:20.184074 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:20.184085 | orchestrator | 2026-04-10 00:25:20.184096 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-10 00:25:20.184127 | orchestrator | Friday 10 April 2026 00:25:20 +0000 (0:00:00.419) 0:00:11.783 ********** 2026-04-10 00:25:20.184139 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:25:20.184150 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:25:20.184172 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:25:31.955550 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:25:31.955661 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:25:31.955676 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:25:31.955687 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:25:31.955697 | orchestrator | 2026-04-10 00:25:31.955710 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-10 00:25:31.955722 | orchestrator | Friday 10 April 2026 00:25:20 +0000 (0:00:00.201) 0:00:11.984 ********** 2026-04-10 00:25:31.955733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:25:31.955759 | orchestrator | 2026-04-10 00:25:31.955770 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-10 00:25:31.955781 | orchestrator | Friday 10 April 2026 00:25:20 +0000 (0:00:00.281) 0:00:12.266 ********** 2026-04-10 00:25:31.955791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:25:31.955802 | orchestrator | 2026-04-10 00:25:31.955812 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-10 
00:25:31.955822 | orchestrator | Friday 10 April 2026 00:25:20 +0000 (0:00:00.286) 0:00:12.552 ********** 2026-04-10 00:25:31.955831 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.955842 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.955852 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.955862 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.955872 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.955881 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.955891 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.955901 | orchestrator | 2026-04-10 00:25:31.955911 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-10 00:25:31.955921 | orchestrator | Friday 10 April 2026 00:25:22 +0000 (0:00:01.402) 0:00:13.955 ********** 2026-04-10 00:25:31.955936 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:25:31.955954 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:25:31.955970 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:25:31.955986 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:25:31.956002 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:25:31.956017 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:25:31.956031 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:25:31.956046 | orchestrator | 2026-04-10 00:25:31.956061 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-10 00:25:31.956107 | orchestrator | Friday 10 April 2026 00:25:22 +0000 (0:00:00.228) 0:00:14.183 ********** 2026-04-10 00:25:31.956179 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.956196 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.956211 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.956227 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.956243 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.956257 | orchestrator 
| ok: [testbed-node-4] 2026-04-10 00:25:31.956270 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.956283 | orchestrator | 2026-04-10 00:25:31.956297 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-10 00:25:31.956310 | orchestrator | Friday 10 April 2026 00:25:23 +0000 (0:00:00.561) 0:00:14.745 ********** 2026-04-10 00:25:31.956325 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:25:31.956339 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:25:31.956355 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:25:31.956370 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:25:31.956385 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:25:31.956400 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:25:31.956414 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:25:31.956429 | orchestrator | 2026-04-10 00:25:31.956445 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-10 00:25:31.956461 | orchestrator | Friday 10 April 2026 00:25:23 +0000 (0:00:00.268) 0:00:15.013 ********** 2026-04-10 00:25:31.956476 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.956491 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:25:31.956507 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:31.956522 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:25:31.956538 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:25:31.956554 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:25:31.956570 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:25:31.956587 | orchestrator | 2026-04-10 00:25:31.956603 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-10 00:25:31.956620 | orchestrator | Friday 10 April 2026 00:25:23 +0000 (0:00:00.546) 0:00:15.560 ********** 2026-04-10 00:25:31.956637 | orchestrator | ok: 
[testbed-manager] 2026-04-10 00:25:31.956653 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:31.956670 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:25:31.956682 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:25:31.956691 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:25:31.956701 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:25:31.956710 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:25:31.956720 | orchestrator | 2026-04-10 00:25:31.956743 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-10 00:25:31.956753 | orchestrator | Friday 10 April 2026 00:25:25 +0000 (0:00:01.173) 0:00:16.734 ********** 2026-04-10 00:25:31.956763 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.956773 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.956783 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.956793 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.956807 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.956823 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.956835 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.956845 | orchestrator | 2026-04-10 00:25:31.956855 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-10 00:25:31.956866 | orchestrator | Friday 10 April 2026 00:25:26 +0000 (0:00:01.056) 0:00:17.790 ********** 2026-04-10 00:25:31.956898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:25:31.956910 | orchestrator | 2026-04-10 00:25:31.956920 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-10 00:25:31.956930 | orchestrator | Friday 10 April 2026 
00:25:26 +0000 (0:00:00.299) 0:00:18.089 ********** 2026-04-10 00:25:31.956954 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:25:31.956964 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:25:31.956974 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:25:31.956983 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:31.956993 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:25:31.957003 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:25:31.957012 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:25:31.957024 | orchestrator | 2026-04-10 00:25:31.957041 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-10 00:25:31.957054 | orchestrator | Friday 10 April 2026 00:25:27 +0000 (0:00:01.223) 0:00:19.313 ********** 2026-04-10 00:25:31.957080 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.957096 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.957139 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.957155 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.957172 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.957187 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.957200 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.957216 | orchestrator | 2026-04-10 00:25:31.957231 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-10 00:25:31.957246 | orchestrator | Friday 10 April 2026 00:25:27 +0000 (0:00:00.239) 0:00:19.552 ********** 2026-04-10 00:25:31.957261 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.957278 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.957294 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.957310 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.957326 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.957343 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.957360 | 
orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.957377 | orchestrator | 2026-04-10 00:25:31.957393 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-10 00:25:31.957408 | orchestrator | Friday 10 April 2026 00:25:28 +0000 (0:00:00.226) 0:00:19.779 ********** 2026-04-10 00:25:31.957418 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.957428 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.957438 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.957447 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.957457 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.957466 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.957476 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.957486 | orchestrator | 2026-04-10 00:25:31.957496 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-10 00:25:31.957506 | orchestrator | Friday 10 April 2026 00:25:28 +0000 (0:00:00.239) 0:00:20.018 ********** 2026-04-10 00:25:31.957517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:25:31.957529 | orchestrator | 2026-04-10 00:25:31.957539 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-10 00:25:31.957550 | orchestrator | Friday 10 April 2026 00:25:28 +0000 (0:00:00.274) 0:00:20.292 ********** 2026-04-10 00:25:31.957567 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.957592 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.957609 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.957624 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.957639 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.957655 | orchestrator | ok: 
[testbed-node-5] 2026-04-10 00:25:31.957671 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.957686 | orchestrator | 2026-04-10 00:25:31.957700 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-10 00:25:31.957715 | orchestrator | Friday 10 April 2026 00:25:29 +0000 (0:00:00.514) 0:00:20.807 ********** 2026-04-10 00:25:31.957731 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:25:31.957747 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:25:31.957779 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:25:31.957789 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:25:31.957799 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:25:31.957808 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:25:31.957818 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:25:31.957828 | orchestrator | 2026-04-10 00:25:31.957838 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-10 00:25:31.957848 | orchestrator | Friday 10 April 2026 00:25:29 +0000 (0:00:00.245) 0:00:21.052 ********** 2026-04-10 00:25:31.957860 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.957881 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:25:31.957904 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.957921 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:31.957937 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.957952 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:25:31.957966 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.957984 | orchestrator | 2026-04-10 00:25:31.958000 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-10 00:25:31.958100 | orchestrator | Friday 10 April 2026 00:25:30 +0000 (0:00:01.018) 0:00:22.071 ********** 2026-04-10 00:25:31.958150 | orchestrator | ok: [testbed-manager] 2026-04-10 
00:25:31.958167 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:25:31.958184 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:25:31.958200 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.958217 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.958228 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:25:31.958245 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:25:31.958261 | orchestrator | 2026-04-10 00:25:31.958278 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-10 00:25:31.958296 | orchestrator | Friday 10 April 2026 00:25:30 +0000 (0:00:00.569) 0:00:22.640 ********** 2026-04-10 00:25:31.958313 | orchestrator | ok: [testbed-manager] 2026-04-10 00:25:31.958331 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:25:31.958348 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:25:31.958358 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:25:31.958393 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:26:13.301383 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:26:13.302353 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:26:13.302427 | orchestrator | 2026-04-10 00:26:13.302443 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-10 00:26:13.302456 | orchestrator | Friday 10 April 2026 00:25:32 +0000 (0:00:01.076) 0:00:23.717 ********** 2026-04-10 00:26:13.302467 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:26:13.302477 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:26:13.302487 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:26:13.302497 | orchestrator | changed: [testbed-manager] 2026-04-10 00:26:13.302507 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:26:13.302517 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:26:13.302527 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:26:13.302537 | orchestrator | 2026-04-10 00:26:13.302547 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-10 00:26:13.302557 | orchestrator | Friday 10 April 2026 00:25:49 +0000 (0:00:17.060) 0:00:40.777 ********** 2026-04-10 00:26:13.302568 | orchestrator | ok: [testbed-manager] 2026-04-10 00:26:13.302578 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:26:13.302587 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:26:13.302597 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:26:13.302607 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:26:13.302617 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:26:13.302626 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:26:13.302636 | orchestrator | 2026-04-10 00:26:13.302646 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-10 00:26:13.302656 | orchestrator | Friday 10 April 2026 00:25:49 +0000 (0:00:00.213) 0:00:40.990 ********** 2026-04-10 00:26:13.302666 | orchestrator | ok: [testbed-manager] 2026-04-10 00:26:13.302703 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:26:13.302713 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:26:13.302723 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:26:13.302732 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:26:13.302742 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:26:13.302751 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:26:13.302761 | orchestrator | 2026-04-10 00:26:13.302771 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-10 00:26:13.302781 | orchestrator | Friday 10 April 2026 00:25:49 +0000 (0:00:00.213) 0:00:41.204 ********** 2026-04-10 00:26:13.302790 | orchestrator | ok: [testbed-manager] 2026-04-10 00:26:13.302800 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:26:13.302809 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:26:13.302819 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:26:13.302828 | orchestrator | ok: 
[testbed-node-3] 2026-04-10 00:26:13.302838 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:26:13.302847 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:26:13.302857 | orchestrator | 2026-04-10 00:26:13.302867 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-10 00:26:13.302876 | orchestrator | Friday 10 April 2026 00:25:49 +0000 (0:00:00.202) 0:00:41.407 ********** 2026-04-10 00:26:13.302888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:26:13.302900 | orchestrator | 2026-04-10 00:26:13.302927 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-10 00:26:13.302937 | orchestrator | Friday 10 April 2026 00:25:49 +0000 (0:00:00.268) 0:00:41.675 ********** 2026-04-10 00:26:13.302947 | orchestrator | ok: [testbed-manager] 2026-04-10 00:26:13.302957 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:26:13.302967 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:26:13.302976 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:26:13.302986 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:26:13.302995 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:26:13.303005 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:26:13.303014 | orchestrator | 2026-04-10 00:26:13.303024 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-10 00:26:13.303034 | orchestrator | Friday 10 April 2026 00:25:51 +0000 (0:00:01.861) 0:00:43.537 ********** 2026-04-10 00:26:13.303044 | orchestrator | changed: [testbed-manager] 2026-04-10 00:26:13.303054 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:26:13.303064 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:26:13.303073 | orchestrator | 
changed: [testbed-node-3] 2026-04-10 00:26:13.303083 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:26:13.303093 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:26:13.303102 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:26:13.303112 | orchestrator | 2026-04-10 00:26:13.303122 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-10 00:26:13.303132 | orchestrator | Friday 10 April 2026 00:25:52 +0000 (0:00:01.092) 0:00:44.629 ********** 2026-04-10 00:26:13.303166 | orchestrator | ok: [testbed-manager] 2026-04-10 00:26:13.303177 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:26:13.303186 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:26:13.303196 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:26:13.303205 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:26:13.303215 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:26:13.303225 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:26:13.303234 | orchestrator | 2026-04-10 00:26:13.303244 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-10 00:26:13.303254 | orchestrator | Friday 10 April 2026 00:25:53 +0000 (0:00:00.983) 0:00:45.612 ********** 2026-04-10 00:26:13.303270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:26:13.303290 | orchestrator | 2026-04-10 00:26:13.303300 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-10 00:26:13.303311 | orchestrator | Friday 10 April 2026 00:25:54 +0000 (0:00:00.335) 0:00:45.948 ********** 2026-04-10 00:26:13.303320 | orchestrator | changed: [testbed-manager] 2026-04-10 00:26:13.303330 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:26:13.303339 | 
orchestrator | changed: [testbed-node-0]
2026-04-10 00:26:13.303349 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:26:13.303359 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:26:13.303368 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:26:13.303378 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:26:13.303388 | orchestrator |
2026-04-10 00:26:13.303421 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-10 00:26:13.303431 | orchestrator | Friday 10 April 2026  00:25:55 +0000 (0:00:01.190)       0:00:47.138 **********
2026-04-10 00:26:13.303441 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:26:13.303451 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:26:13.303461 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:26:13.303470 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:26:13.303480 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:26:13.303490 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:26:13.303499 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:26:13.303510 | orchestrator |
2026-04-10 00:26:13.303520 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-10 00:26:13.303529 | orchestrator | Friday 10 April 2026  00:25:55 +0000 (0:00:00.251)       0:00:47.389 **********
2026-04-10 00:26:13.303539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:26:13.303549 | orchestrator |
2026-04-10 00:26:13.303612 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-10 00:26:13.303625 | orchestrator | Friday 10 April 2026  00:25:56 +0000 (0:00:00.378)       0:00:47.768 **********
2026-04-10 00:26:13.303635 | orchestrator | ok: [testbed-manager]
2026-04-10 00:26:13.303645 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:26:13.303655 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:26:13.303664 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:26:13.303674 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:26:13.303684 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:26:13.303693 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:26:13.303703 | orchestrator |
2026-04-10 00:26:13.303743 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-10 00:26:13.303755 | orchestrator | Friday 10 April 2026  00:25:57 +0000 (0:00:01.875)       0:00:49.643 **********
2026-04-10 00:26:13.303765 | orchestrator | changed: [testbed-manager]
2026-04-10 00:26:13.303775 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:26:13.303785 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:26:13.303795 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:26:13.303805 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:26:13.303815 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:26:13.303825 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:26:13.303835 | orchestrator |
2026-04-10 00:26:13.303845 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-10 00:26:13.303854 | orchestrator | Friday 10 April 2026  00:25:59 +0000 (0:00:01.199)       0:00:50.843 **********
2026-04-10 00:26:13.303865 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:26:13.303875 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:26:13.303885 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:26:13.303894 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:26:13.303904 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:26:13.303914 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:26:13.303932 | orchestrator | changed: [testbed-manager]
2026-04-10 00:26:13.303943 | orchestrator |
2026-04-10 00:26:13.303952 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-10 00:26:13.303963 | orchestrator | Friday 10 April 2026  00:26:10 +0000 (0:00:11.812)       0:01:02.655 **********
2026-04-10 00:26:13.303973 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:26:13.303982 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:26:13.303992 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:26:13.304002 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:26:13.304011 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:26:13.304021 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:26:13.304031 | orchestrator | ok: [testbed-manager]
2026-04-10 00:26:13.304040 | orchestrator |
2026-04-10 00:26:13.304050 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-10 00:26:13.304060 | orchestrator | Friday 10 April 2026  00:26:11 +0000 (0:00:00.742)       0:01:03.397 **********
2026-04-10 00:26:13.304070 | orchestrator | ok: [testbed-manager]
2026-04-10 00:26:13.304080 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:26:13.304090 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:26:13.304099 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:26:13.304109 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:26:13.304119 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:26:13.304128 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:26:13.304192 | orchestrator |
2026-04-10 00:26:13.304203 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-10 00:26:13.304213 | orchestrator | Friday 10 April 2026  00:26:12 +0000 (0:00:00.900)       0:01:04.298 **********
2026-04-10 00:26:13.304222 | orchestrator | ok: [testbed-manager]
2026-04-10 00:26:13.304232 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:26:13.304241 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:26:13.304251 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:26:13.304261 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:26:13.304270 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:26:13.304280 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:26:13.304290 | orchestrator |
2026-04-10 00:26:13.304300 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-10 00:26:13.304310 | orchestrator | Friday 10 April 2026  00:26:12 +0000 (0:00:00.239)       0:01:04.537 **********
2026-04-10 00:26:13.304320 | orchestrator | ok: [testbed-manager]
2026-04-10 00:26:13.304329 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:26:13.304339 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:26:13.304348 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:26:13.304364 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:26:13.304374 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:26:13.304385 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:26:13.304402 | orchestrator |
2026-04-10 00:26:13.304418 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-10 00:26:13.304435 | orchestrator | Friday 10 April 2026  00:26:13 +0000 (0:00:00.236)       0:01:04.767 **********
2026-04-10 00:26:13.304451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:26:13.304467 | orchestrator |
2026-04-10 00:26:13.304496 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-10 00:28:42.788710 | orchestrator | Friday 10 April 2026  00:26:13 +0000 (0:00:00.236)       0:01:05.004 **********
2026-04-10 00:28:42.788815 | orchestrator | ok: [testbed-manager]
2026-04-10 00:28:42.788826 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:28:42.788837 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:28:42.788846 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:28:42.788856 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:28:42.788865 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:28:42.788874 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:28:42.788884 | orchestrator |
2026-04-10 00:28:42.788895 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-10 00:28:42.788924 | orchestrator | Friday 10 April 2026  00:26:15 +0000 (0:00:01.860)       0:01:06.864 **********
2026-04-10 00:28:42.788931 | orchestrator | changed: [testbed-manager]
2026-04-10 00:28:42.788939 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:28:42.788944 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:28:42.788950 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:28:42.788956 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:28:42.788962 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:28:42.788967 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:28:42.788973 | orchestrator |
2026-04-10 00:28:42.788979 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-10 00:28:42.788986 | orchestrator | Friday 10 April 2026  00:26:15 +0000 (0:00:00.473)       0:01:07.338 **********
2026-04-10 00:28:42.788992 | orchestrator | ok: [testbed-manager]
2026-04-10 00:28:42.788998 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:28:42.789003 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:28:42.789009 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:28:42.789015 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:28:42.789020 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:28:42.789026 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:28:42.789031 | orchestrator |
2026-04-10 00:28:42.789037 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-10 00:28:42.789043 | orchestrator | Friday 10 April 2026  00:26:15 +0000 (0:00:00.185)       0:01:07.523 **********
2026-04-10 00:28:42.789049 | orchestrator | ok: [testbed-manager]
2026-04-10 00:28:42.789054 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:28:42.789060 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:28:42.789066 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:28:42.789071 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:28:42.789077 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:28:42.789082 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:28:42.789088 | orchestrator |
2026-04-10 00:28:42.789094 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-10 00:28:42.789100 | orchestrator | Friday 10 April 2026  00:26:17 +0000 (0:00:01.199)       0:01:08.723 **********
2026-04-10 00:28:42.789105 | orchestrator | changed: [testbed-manager]
2026-04-10 00:28:42.789111 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:28:42.789117 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:28:42.789122 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:28:42.789128 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:28:42.789134 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:28:42.789139 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:28:42.789145 | orchestrator |
2026-04-10 00:28:42.789151 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-10 00:28:42.789156 | orchestrator | Friday 10 April 2026  00:26:18 +0000 (0:00:01.873)       0:01:10.596 **********
2026-04-10 00:28:42.789162 | orchestrator | ok: [testbed-manager]
2026-04-10 00:28:42.789168 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:28:42.789173 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:28:42.789179 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:28:42.789185 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:28:42.789191 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:28:42.789196 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:28:42.789202 | orchestrator |
2026-04-10 00:28:42.789208 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-10 00:28:42.789214 | orchestrator | Friday 10 April 2026  00:26:21 +0000 (0:00:03.003)       0:01:13.600 **********
2026-04-10 00:28:42.789219 | orchestrator | ok: [testbed-manager]
2026-04-10 00:28:42.789251 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:28:42.789258 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:28:42.789264 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:28:42.789271 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:28:42.789278 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:28:42.789285 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:28:42.789296 | orchestrator |
2026-04-10 00:28:42.789306 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-10 00:28:42.789323 | orchestrator | Friday 10 April 2026  00:27:11 +0000 (0:00:49.869)       0:02:03.470 **********
2026-04-10 00:28:42.789332 | orchestrator | changed: [testbed-manager]
2026-04-10 00:28:42.789341 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:28:42.789350 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:28:42.789359 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:28:42.789369 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:28:42.789380 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:28:42.789390 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:28:42.789399 | orchestrator |
2026-04-10 00:28:42.789408 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-10 00:28:42.789414 | orchestrator | Friday 10 April 2026  00:28:28 +0000 (0:01:16.628)       0:03:20.099 **********
2026-04-10 00:28:42.789420 | orchestrator | ok: [testbed-manager]
2026-04-10 00:28:42.789425 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:28:42.789431 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:28:42.789437 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:28:42.789443 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:28:42.789449 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:28:42.789455 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:28:42.789461 | orchestrator |
2026-04-10 00:28:42.789467 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-10 00:28:42.789473 | orchestrator | Friday 10 April 2026  00:28:30 +0000 (0:00:02.246)       0:03:22.345 **********
2026-04-10 00:28:42.789479 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:28:42.789485 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:28:42.789490 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:28:42.789496 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:28:42.789502 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:28:42.789508 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:28:42.789513 | orchestrator | changed: [testbed-manager]
2026-04-10 00:28:42.789519 | orchestrator |
2026-04-10 00:28:42.789525 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-10 00:28:42.789531 | orchestrator | Friday 10 April 2026  00:28:41 +0000 (0:00:11.077)       0:03:33.423 **********
2026-04-10 00:28:42.789564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-10 00:28:42.789580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-10 00:28:42.789588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-10 00:28:42.789596 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-10 00:28:42.789608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-10 00:28:42.789614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-10 00:28:42.789627 | orchestrator |
2026-04-10 00:28:42.789638 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-10 00:28:42.789649 | orchestrator | Friday 10 April 2026  00:28:42 +0000 (0:00:00.367)       0:03:33.791 **********
2026-04-10 00:28:42.789659 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-10 00:28:42.789669 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:28:42.789680 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-10 00:28:42.789690 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:28:42.789700 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-10 00:28:42.789710 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:28:42.789730 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-10 00:28:42.789740 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:28:42.789751 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-10 00:28:42.789761 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-10 00:28:42.789772 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-10 00:28:42.789781 | orchestrator |
2026-04-10 00:28:42.789792 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-10 00:28:42.789806 | orchestrator | Friday 10 April 2026  00:28:42 +0000 (0:00:00.631)       0:03:34.423 **********
2026-04-10 00:28:42.789817 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-10 00:28:42.789829 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-10 00:28:42.789839 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-10 00:28:42.789848 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-10 00:28:42.789858 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-10 00:28:42.789876 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-10 00:28:51.622627 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-10 00:28:51.622709 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-10 00:28:51.622717 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-10 00:28:51.622723 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-10 00:28:51.622729 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:28:51.622736 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-10 00:28:51.622741 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-10 00:28:51.622746 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-10 00:28:51.622764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-10 00:28:51.622769 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-10 00:28:51.622774 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-10 00:28:51.622778 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-10 00:28:51.622783 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-10 00:28:51.622788 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-10 00:28:51.622792 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-10 00:28:51.622797 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-10 00:28:51.622802 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-10 00:28:51.622806 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-10 00:28:51.622811 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-10 00:28:51.622816 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-10 00:28:51.622820 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-10 00:28:51.622825 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-10 00:28:51.622830 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-10 00:28:51.622835 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-10 00:28:51.622839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-10 00:28:51.622844 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:28:51.622849 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-10 00:28:51.622853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-10 00:28:51.622858 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-10 00:28:51.622862 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:28:51.622867 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-10 00:28:51.622872 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-10 00:28:51.622876 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-10 00:28:51.622881 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-10 00:28:51.622885 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-10 00:28:51.622890 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-10 00:28:51.622899 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-10 00:28:51.622904 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:28:51.622909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-10 00:28:51.622913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-10 00:28:51.622918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-10 00:28:51.622926 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-10 00:28:51.622931 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-10 00:28:51.622946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-10 00:28:51.622951 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-10 00:28:51.622955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-10 00:28:51.622960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-10 00:28:51.622965 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-10 00:28:51.622969 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-10 00:28:51.622974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-10 00:28:51.622978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-10 00:28:51.622983 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-10 00:28:51.622988 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-10 00:28:51.622992 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-10 00:28:51.622997 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-10 00:28:51.623002 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-10 00:28:51.623006 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-10 00:28:51.623011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-10 00:28:51.623016 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-10 00:28:51.623020 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-10 00:28:51.623025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-10 00:28:51.623029 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-10 00:28:51.623034 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-10 00:28:51.623038 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-10 00:28:51.623043 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-10 00:28:51.623048 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-10 00:28:51.623052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-10 00:28:51.623057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-10 00:28:51.623062 | orchestrator |
2026-04-10 00:28:51.623067 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-10 00:28:51.623072 | orchestrator | Friday 10 April 2026  00:28:49 +0000 (0:00:06.790)       0:03:41.213 **********
2026-04-10 00:28:51.623076 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-10 00:28:51.623081 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-10 00:28:51.623086 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-10 00:28:51.623090 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-10 00:28:51.623098 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-10 00:28:51.623102 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-10 00:28:51.623107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-10 00:28:51.623112 | orchestrator |
2026-04-10 00:28:51.623116 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-10 00:28:51.623121 | orchestrator | Friday 10 April 2026  00:28:51 +0000 (0:00:01.579)       0:03:42.793 **********
2026-04-10 00:28:51.623126 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:28:51.623130 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:28:51.623137 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:28:51.623142 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:28:51.623147 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:28:51.623152 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:28:51.623156 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:28:51.623161 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:28:51.623166 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:28:51.623170 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:28:51.623178 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054563 | orchestrator |
2026-04-10 00:29:04.054706 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-10 00:29:04.054726 | orchestrator | Friday 10 April 2026  00:28:51 +0000 (0:00:00.573)       0:03:43.366 **********
2026-04-10 00:29:04.054739 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054751 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:29:04.054764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054775 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:29:04.054787 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054798 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:29:04.054809 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054820 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:29:04.054831 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054842 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054853 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-10 00:29:04.054864 | orchestrator |
2026-04-10 00:29:04.054876 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-10 00:29:04.054887 | orchestrator | Friday 10 April 2026  00:28:52 +0000 (0:00:00.525)       0:03:43.891 **********
2026-04-10 00:29:04.054926 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-10 00:29:04.054937 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:29:04.054948 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-10 00:29:04.054960 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-10 00:29:04.054971 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:29:04.055007 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-10 00:29:04.055018 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:29:04.055029 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:29:04.055040 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-10 00:29:04.055051 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-10 00:29:04.055062 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-10 00:29:04.055073 | orchestrator |
2026-04-10 00:29:04.055086 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-10 00:29:04.055100 | orchestrator | Friday 10 April 2026  00:28:52 +0000 (0:00:00.646)       0:03:44.538 **********
2026-04-10 00:29:04.055113 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:29:04.055126 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:29:04.055141 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:29:04.055162 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:29:04.055182 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:29:04.055196 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:29:04.055210 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:29:04.055220 | orchestrator |
2026-04-10 00:29:04.055232 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-10 00:29:04.055274 | orchestrator | Friday 10 April 2026  00:28:53 +0000 (0:00:00.272)       0:03:44.811 **********
2026-04-10 00:29:04.055286 | orchestrator | ok: [testbed-manager]
2026-04-10 00:29:04.055298 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:29:04.055309 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:29:04.055320 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:29:04.055331 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:29:04.055342 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:29:04.055353 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:29:04.055363 | orchestrator |
2026-04-10 00:29:04.055374 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-10 00:29:04.055385 | orchestrator | Friday 10 April 2026  00:28:58 +0000 (0:00:05.377)       0:03:50.188 **********
2026-04-10 00:29:04.055396 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-10 00:29:04.055408 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:29:04.055419 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-10 00:29:04.055430 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-10 00:29:04.055441 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:29:04.055452 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:29:04.055463 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-10 00:29:04.055474 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-10 00:29:04.055484 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:29:04.055496 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-10 00:29:04.055516 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:29:04.055533 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:29:04.055551 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-10 00:29:04.055567 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:29:04.055585 | orchestrator |
2026-04-10 00:29:04.055602 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-10 00:29:04.055622 | orchestrator | Friday 10 April 2026  00:28:58 +0000 (0:00:00.307)       0:03:50.495 **********
2026-04-10 00:29:04.055641 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-10 00:29:04.055660 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-10 00:29:04.055679 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-10 00:29:04.055713 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-10 00:29:04.055734 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-10 00:29:04.055746 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-10 00:29:04.055768 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-10 00:29:04.055779 | orchestrator |
2026-04-10 00:29:04.055790 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-10 00:29:04.055801 | orchestrator | Friday 10 April 2026  00:28:59 +0000 (0:00:01.069)       0:03:51.565 **********
2026-04-10 00:29:04.055813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:29:04.055827 | orchestrator |
2026-04-10 00:29:04.055838 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-10 00:29:04.055849 | orchestrator | Friday 10 April 2026  00:29:00 +0000 (0:00:00.386)       0:03:51.951 **********
2026-04-10 00:29:04.055860 | orchestrator | ok: [testbed-manager]
2026-04-10 00:29:04.055871 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:29:04.055882 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:29:04.055893 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:29:04.055903 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:29:04.055914 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:29:04.055925 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:29:04.055936 | orchestrator |
2026-04-10 00:29:04.055947 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-10 00:29:04.055958 | orchestrator | Friday 10 April 2026  00:29:01 +0000 (0:00:01.399)       0:03:53.350 **********
2026-04-10 00:29:04.055969 | orchestrator | ok: [testbed-manager]
2026-04-10 00:29:04.055979 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:29:04.055990 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:29:04.056000 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:29:04.056011 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:29:04.056022 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:29:04.056052 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:29:04.056063 | orchestrator |
2026-04-10 00:29:04.056074 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-10 00:29:04.056085 | orchestrator | Friday 10 April 2026  00:29:02 +0000 (0:00:00.617)       0:03:53.967 **********
2026-04-10 00:29:04.056096 | orchestrator | changed: [testbed-manager]
2026-04-10 00:29:04.056107 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:29:04.056118 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:29:04.056129 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:29:04.056140 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:29:04.056151 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:29:04.056161 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:29:04.056172 | orchestrator |
2026-04-10 00:29:04.056183 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-10 00:29:04.056194 | orchestrator | Friday 10 April 2026  00:29:02 +0000 (0:00:00.678)
0:03:54.646 ********** 2026-04-10 00:29:04.056204 | orchestrator | ok: [testbed-manager] 2026-04-10 00:29:04.056215 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:29:04.056226 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:29:04.056268 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:29:04.056287 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:29:04.056305 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:29:04.056325 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:29:04.056344 | orchestrator | 2026-04-10 00:29:04.056363 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-10 00:29:04.056382 | orchestrator | Friday 10 April 2026 00:29:03 +0000 (0:00:00.561) 0:03:55.207 ********** 2026-04-10 00:29:04.056398 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775779509.7159312, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:04.056428 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775779540.3209689, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:04.056441 | orchestrator | 
changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775779529.638291, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:04.056478 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775779526.4423285, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423034 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775779527.011543, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423147 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775779519.4326699, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423162 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775779530.6095786, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423176 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423211 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423332 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423350 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423381 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423393 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423405 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 00:29:09.423418 | orchestrator | 2026-04-10 00:29:09.423431 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-10 00:29:09.423444 | orchestrator | Friday 10 April 2026 00:29:04 +0000 (0:00:00.958) 0:03:56.166 ********** 2026-04-10 00:29:09.423456 | orchestrator | changed: [testbed-manager] 2026-04-10 00:29:09.423471 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:29:09.423483 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:29:09.423506 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:29:09.423518 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:29:09.423528 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:29:09.423539 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:29:09.423550 | orchestrator | 2026-04-10 00:29:09.423562 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-10 00:29:09.423573 | orchestrator | Friday 10 April 2026 00:29:05 +0000 (0:00:01.135) 0:03:57.301 ********** 2026-04-10 00:29:09.423584 | orchestrator | changed: [testbed-manager] 2026-04-10 00:29:09.423595 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:29:09.423605 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:29:09.423617 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:29:09.423629 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:29:09.423641 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:29:09.423653 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:29:09.423666 | orchestrator | 2026-04-10 00:29:09.423678 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-10 00:29:09.423690 | orchestrator | Friday 10 April 2026 00:29:06 +0000 (0:00:01.123) 0:03:58.425 ********** 2026-04-10 00:29:09.423703 | orchestrator | changed: [testbed-manager] 2026-04-10 00:29:09.423714 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:29:09.423725 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:29:09.423737 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:29:09.423748 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:29:09.423758 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:29:09.423769 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:29:09.423781 | orchestrator | 2026-04-10 00:29:09.423793 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-10 00:29:09.423813 | orchestrator | Friday 10 April 2026 00:29:08 +0000 (0:00:01.299) 0:03:59.725 ********** 2026-04-10 00:29:09.423826 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:29:09.423837 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:29:09.423849 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:29:09.423861 | orchestrator | skipping: [testbed-node-2] 
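Editor's note: the `osism.commons.motd` tasks logged here amount to disabling the dynamic motd machinery and installing static `/etc/motd`, `/etc/issue`, and `/etc/issue.net` files. A minimal hand-written sketch of the same effect follows; this is NOT the `osism.commons.motd` role source, and the task bodies, file names, and regexps are assumptions:

```yaml
# Hedged sketch of the effect of the motd tasks above (not the role source).
- name: Disable the dynamic motd-news service
  ansible.builtin.lineinfile:
    path: /etc/default/motd-news
    regexp: '^ENABLED='
    line: 'ENABLED=0'

- name: Remove pam_motd.so rule
  ansible.builtin.lineinfile:
    path: "{{ item }}"
    regexp: 'pam_motd\.so'
    state: absent
  loop:
    - /etc/pam.d/sshd
    - /etc/pam.d/login

- name: Copy motd file
  ansible.builtin.copy:
    src: motd            # assumed template/file name
    dest: /etc/motd
    owner: root
    group: root
    mode: "0644"
```

With `pam_motd.so` removed and `PrintMotd no` in `sshd_config` (the "Configure SSH to not print the motd" task below reports `ok`), logins show only the static files copied here.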
2026-04-10 00:29:09.423873 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:29:09.423885 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:29:09.423898 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:29:09.423910 | orchestrator |
2026-04-10 00:29:09.423922 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-10 00:29:09.423933 | orchestrator | Friday 10 April 2026 00:29:08 +0000 (0:00:00.255) 0:03:59.981 **********
2026-04-10 00:29:09.423945 | orchestrator | ok: [testbed-manager]
2026-04-10 00:29:09.423959 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:29:09.423971 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:29:09.423984 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:29:09.423997 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:29:09.424011 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:29:09.424023 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:29:09.424035 | orchestrator |
2026-04-10 00:29:09.424047 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-10 00:29:09.424059 | orchestrator | Friday 10 April 2026 00:29:09 +0000 (0:00:00.742) 0:04:00.724 **********
2026-04-10 00:29:09.424073 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:29:09.424087 | orchestrator |
2026-04-10 00:29:09.424099 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-10 00:29:09.424122 | orchestrator | Friday 10 April 2026 00:29:09 +0000 (0:00:00.402) 0:04:01.127 **********
2026-04-10 00:30:29.551880 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.552002 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:30:29.552020 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:30:29.552032 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:30:29.552069 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:30:29.552081 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:30:29.552091 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:30:29.552103 | orchestrator |
2026-04-10 00:30:29.552117 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-10 00:30:29.552129 | orchestrator | Friday 10 April 2026 00:29:18 +0000 (0:00:08.791) 0:04:09.918 **********
2026-04-10 00:30:29.552140 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.552151 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:29.552162 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:29.552173 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:29.552184 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:29.552195 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:29.552205 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:29.552216 | orchestrator |
2026-04-10 00:30:29.552227 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-10 00:30:29.552238 | orchestrator | Friday 10 April 2026 00:29:19 +0000 (0:00:01.308) 0:04:11.227 **********
2026-04-10 00:30:29.552249 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.552260 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:29.552373 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:29.552385 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:29.552396 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:29.552409 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:29.552421 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:29.552433 | orchestrator |
2026-04-10 00:30:29.552446 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-10 00:30:29.552459 | orchestrator | Friday 10 April 2026 00:29:20 +0000 (0:00:01.031) 0:04:12.258 **********
2026-04-10 00:30:29.552472 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.552485 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:29.552497 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:29.552509 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:29.552522 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:29.552534 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:29.552547 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:29.552559 | orchestrator |
2026-04-10 00:30:29.552572 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-10 00:30:29.552586 | orchestrator | Friday 10 April 2026 00:29:20 +0000 (0:00:00.283) 0:04:12.542 **********
2026-04-10 00:30:29.552598 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.552610 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:29.552623 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:29.552635 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:29.552647 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:29.552659 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:29.552672 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:29.552684 | orchestrator |
2026-04-10 00:30:29.552715 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-10 00:30:29.552728 | orchestrator | Friday 10 April 2026 00:29:21 +0000 (0:00:00.292) 0:04:12.834 **********
2026-04-10 00:30:29.552741 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.552754 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:29.552766 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:29.552779 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:29.552791 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:29.552802 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:29.552813 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:29.552824 | orchestrator |
2026-04-10 00:30:29.552835 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-10 00:30:29.552846 | orchestrator | Friday 10 April 2026 00:29:21 +0000 (0:00:00.281) 0:04:13.115 **********
2026-04-10 00:30:29.552857 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:29.552868 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:29.552879 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:29.552901 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:29.552912 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:29.552923 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.552934 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:29.552945 | orchestrator |
2026-04-10 00:30:29.552956 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-10 00:30:29.552967 | orchestrator | Friday 10 April 2026 00:29:25 +0000 (0:00:04.580) 0:04:17.696 **********
2026-04-10 00:30:29.552980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:30:29.552994 | orchestrator |
2026-04-10 00:30:29.553005 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-10 00:30:29.553016 | orchestrator | Friday 10 April 2026 00:29:26 +0000 (0:00:00.424) 0:04:18.121 **********
2026-04-10 00:30:29.553027 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-10 00:30:29.553038 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-10 00:30:29.553049 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:29.553060 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-10 00:30:29.553072 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-10 00:30:29.553083 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-10 00:30:29.553093 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-10 00:30:29.553104 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:29.553115 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-10 00:30:29.553126 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-10 00:30:29.553137 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:29.553148 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-10 00:30:29.553159 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-10 00:30:29.553170 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:30:29.553181 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-10 00:30:29.553192 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-10 00:30:29.553221 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:30:29.553233 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:30:29.553244 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-10 00:30:29.553255 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-10 00:30:29.553283 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:30:29.553294 | orchestrator |
2026-04-10 00:30:29.553306 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-10 00:30:29.553317 | orchestrator | Friday 10 April 2026 00:29:26 +0000 (0:00:00.361) 0:04:18.483 **********
2026-04-10 00:30:29.553328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:30:29.553340 | orchestrator |
2026-04-10 00:30:29.553351 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-10 00:30:29.553362 | orchestrator | Friday 10 April 2026 00:29:27 +0000 (0:00:00.506) 0:04:18.989 **********
2026-04-10 00:30:29.553373 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-10 00:30:29.553384 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-10 00:30:29.553395 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:29.553406 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-10 00:30:29.553436 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:29.553447 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-10 00:30:29.553459 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:29.553478 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-10 00:30:29.553489 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:30:29.553500 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:30:29.553511 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-10 00:30:29.553522 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:30:29.553533 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-10 00:30:29.553544 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:30:29.553555 | orchestrator |
2026-04-10 00:30:29.553566 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-10 00:30:29.553576 | orchestrator | Friday 10 April 2026 00:29:27 +0000 (0:00:00.309) 0:04:19.299 **********
2026-04-10 00:30:29.553588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:30:29.553599 | orchestrator |
2026-04-10 00:30:29.553610 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-10 00:30:29.553620 | orchestrator | Friday 10 April 2026 00:29:27 +0000 (0:00:00.386) 0:04:19.686 **********
2026-04-10 00:30:29.553631 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:30:29.553642 | orchestrator | changed: [testbed-manager]
2026-04-10 00:30:29.553653 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:30:29.553664 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:30:29.553675 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:30:29.553686 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:30:29.553697 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:30:29.553708 | orchestrator |
2026-04-10 00:30:29.553719 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-10 00:30:29.553730 | orchestrator | Friday 10 April 2026 00:30:01 +0000 (0:00:33.245) 0:04:52.931 **********
2026-04-10 00:30:29.553741 | orchestrator | changed: [testbed-manager]
2026-04-10 00:30:29.553752 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:30:29.553763 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:30:29.553774 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:30:29.553785 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:30:29.553795 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:30:29.553806 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:30:29.553817 | orchestrator |
2026-04-10 00:30:29.553833 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-10 00:30:29.553845 | orchestrator | Friday 10 April 2026 00:30:10 +0000 (0:00:09.647) 0:05:02.580 **********
2026-04-10 00:30:29.553856 | orchestrator | changed: [testbed-manager]
2026-04-10 00:30:29.553867 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:30:29.553878 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:30:29.553889 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:30:29.553900 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:30:29.553910 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:30:29.553921 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:30:29.553932 | orchestrator |
2026-04-10 00:30:29.553944 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-10 00:30:29.553954 | orchestrator | Friday 10 April 2026 00:30:20 +0000 (0:00:09.432) 0:05:12.012 **********
2026-04-10 00:30:29.553966 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:29.553977 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:29.553988 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:29.553999 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:29.554010 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:29.554086 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:29.554097 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:29.554144 | orchestrator |
2026-04-10 00:30:29.554157 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-10 00:30:29.554178 | orchestrator | Friday 10 April 2026 00:30:22 +0000 (0:00:02.121) 0:05:14.134 **********
2026-04-10 00:30:29.554189 | orchestrator | changed: [testbed-manager]
2026-04-10 00:30:29.554200 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:30:29.554211 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:30:29.554222 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:30:29.554233 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:30:29.554244 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:30:29.554255 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:30:29.554284 | orchestrator |
2026-04-10 00:30:29.554306 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-10 00:30:40.996931 | orchestrator | Friday 10 April 2026 00:30:29 +0000 (0:00:07.116) 0:05:21.251 **********
2026-04-10 00:30:40.997067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:30:40.997092 | orchestrator |
2026-04-10 00:30:40.997109 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-10 00:30:40.997128 | orchestrator | Friday 10 April 2026 00:30:29 +0000 (0:00:00.361) 0:05:21.612 **********
2026-04-10 00:30:40.997146 | orchestrator | changed: [testbed-manager]
2026-04-10 00:30:40.997163 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:30:40.997179 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:30:40.997194 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:30:40.997209 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:30:40.997225 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:30:40.997241 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:30:40.997257 | orchestrator |
2026-04-10 00:30:40.997324 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-10 00:30:40.997339 | orchestrator | Friday 10 April 2026 00:30:30 +0000 (0:00:00.733) 0:05:22.346 **********
2026-04-10 00:30:40.997356 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:40.997374 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:40.997392 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:40.997410 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:40.997427 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:40.997445 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:40.997465 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:40.997484 | orchestrator |
2026-04-10 00:30:40.997505 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-10 00:30:40.997528 | orchestrator | Friday 10 April 2026 00:30:32 +0000 (0:00:02.012) 0:05:24.359 **********
2026-04-10 00:30:40.997547 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:30:40.997568 | orchestrator | changed: [testbed-manager]
2026-04-10 00:30:40.997588 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:30:40.997607 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:30:40.997627 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:30:40.997648 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:30:40.997668 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:30:40.997688 | orchestrator |
2026-04-10 00:30:40.997706 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-10 00:30:40.997725 | orchestrator | Friday 10 April 2026 00:30:33 +0000 (0:00:00.768) 0:05:25.127 **********
2026-04-10 00:30:40.997743 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:40.997762 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:40.997780 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:40.997799 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:30:40.997815 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:30:40.997831 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:30:40.997846 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:30:40.997862 | orchestrator |
2026-04-10 00:30:40.997876 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-10 00:30:40.997892 | orchestrator | Friday 10 April 2026 00:30:33 +0000 (0:00:00.299) 0:05:25.427 **********
2026-04-10 00:30:40.997938 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:40.997955 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:40.997970 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:40.997987 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:30:40.998003 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:30:40.998092 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:30:40.998116 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:30:40.998133 | orchestrator |
2026-04-10 00:30:40.998151 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-10 00:30:40.998167 | orchestrator | Friday 10 April 2026 00:30:34 +0000 (0:00:00.406) 0:05:25.834 **********
2026-04-10 00:30:40.998184 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:40.998203 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:40.998219 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:40.998232 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:40.998242 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:40.998251 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:40.998420 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:40.998457 | orchestrator |
2026-04-10 00:30:40.998466 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-10 00:30:40.998475 | orchestrator | Friday 10 April 2026 00:30:34 +0000 (0:00:00.404) 0:05:26.239 **********
2026-04-10 00:30:40.998483 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:40.998491 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:40.998499 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:40.998507 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:30:40.998515 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:30:40.998523 | orchestrator | skipping: [testbed-node-4]
2026-04-10
00:30:40.998530 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:30:40.998538 | orchestrator | 2026-04-10 00:30:40.998547 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-10 00:30:40.998556 | orchestrator | Friday 10 April 2026 00:30:34 +0000 (0:00:00.291) 0:05:26.531 ********** 2026-04-10 00:30:40.998563 | orchestrator | ok: [testbed-manager] 2026-04-10 00:30:40.998571 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:30:40.998579 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:30:40.998587 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:30:40.998595 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:30:40.998603 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:30:40.998610 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:30:40.998618 | orchestrator | 2026-04-10 00:30:40.998626 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-10 00:30:40.998634 | orchestrator | Friday 10 April 2026 00:30:35 +0000 (0:00:00.335) 0:05:26.866 ********** 2026-04-10 00:30:40.998642 | orchestrator | ok: [testbed-manager] =>  2026-04-10 00:30:40.998650 | orchestrator |  docker_version: 5:27.5.1 2026-04-10 00:30:40.998658 | orchestrator | ok: [testbed-node-0] =>  2026-04-10 00:30:40.998666 | orchestrator |  docker_version: 5:27.5.1 2026-04-10 00:30:40.998674 | orchestrator | ok: [testbed-node-1] =>  2026-04-10 00:30:40.998682 | orchestrator |  docker_version: 5:27.5.1 2026-04-10 00:30:40.998690 | orchestrator | ok: [testbed-node-2] =>  2026-04-10 00:30:40.998698 | orchestrator |  docker_version: 5:27.5.1 2026-04-10 00:30:40.998726 | orchestrator | ok: [testbed-node-3] =>  2026-04-10 00:30:40.998734 | orchestrator |  docker_version: 5:27.5.1 2026-04-10 00:30:40.998742 | orchestrator | ok: [testbed-node-4] =>  2026-04-10 00:30:40.998749 | orchestrator |  docker_version: 5:27.5.1 2026-04-10 00:30:40.998757 | orchestrator | ok: [testbed-node-5] =>  
2026-04-10 00:30:40.998765 | orchestrator |  docker_version: 5:27.5.1
2026-04-10 00:30:40.998773 | orchestrator |
2026-04-10 00:30:40.998781 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-10 00:30:40.998789 | orchestrator | Friday 10 April 2026 00:30:35 +0000 (0:00:00.279) 0:05:27.145 **********
2026-04-10 00:30:40.998797 | orchestrator | ok: [testbed-manager] =>
2026-04-10 00:30:40.998815 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-10 00:30:40.998823 | orchestrator | ok: [testbed-node-0] =>
2026-04-10 00:30:40.998831 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-10 00:30:40.998839 | orchestrator | ok: [testbed-node-1] =>
2026-04-10 00:30:40.998847 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-10 00:30:40.998855 | orchestrator | ok: [testbed-node-2] =>
2026-04-10 00:30:40.998862 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-10 00:30:40.998870 | orchestrator | ok: [testbed-node-3] =>
2026-04-10 00:30:40.998878 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-10 00:30:40.998886 | orchestrator | ok: [testbed-node-4] =>
2026-04-10 00:30:40.998894 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-10 00:30:40.998901 | orchestrator | ok: [testbed-node-5] =>
2026-04-10 00:30:40.998909 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-10 00:30:40.998917 | orchestrator |
2026-04-10 00:30:40.998925 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-10 00:30:40.998933 | orchestrator | Friday 10 April 2026 00:30:35 +0000 (0:00:00.283) 0:05:27.429 **********
2026-04-10 00:30:40.998941 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:40.998949 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:40.998957 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:40.998965 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:30:40.998972 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:30:40.998980 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:30:40.998988 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:30:40.998996 | orchestrator |
2026-04-10 00:30:40.999004 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-10 00:30:40.999011 | orchestrator | Friday 10 April 2026 00:30:35 +0000 (0:00:00.269) 0:05:27.698 **********
2026-04-10 00:30:40.999019 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:40.999027 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:40.999035 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:40.999043 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:30:40.999050 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:30:40.999058 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:30:40.999066 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:30:40.999074 | orchestrator |
2026-04-10 00:30:40.999082 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-10 00:30:40.999090 | orchestrator | Friday 10 April 2026 00:30:36 +0000 (0:00:00.242) 0:05:27.941 **********
2026-04-10 00:30:40.999099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:30:40.999109 | orchestrator |
2026-04-10 00:30:40.999117 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-10 00:30:40.999125 | orchestrator | Friday 10 April 2026 00:30:36 +0000 (0:00:00.400) 0:05:28.341 **********
2026-04-10 00:30:40.999133 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:40.999141 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:40.999149 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:40.999157 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:40.999164 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:40.999172 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:40.999180 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:40.999188 | orchestrator |
2026-04-10 00:30:40.999195 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-10 00:30:40.999203 | orchestrator | Friday 10 April 2026 00:30:37 +0000 (0:00:00.884) 0:05:29.226 **********
2026-04-10 00:30:40.999211 | orchestrator | ok: [testbed-manager]
2026-04-10 00:30:40.999225 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:30:40.999233 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:30:40.999241 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:30:40.999249 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:30:40.999262 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:30:40.999304 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:30:40.999319 | orchestrator |
2026-04-10 00:30:40.999333 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-10 00:30:40.999348 | orchestrator | Friday 10 April 2026 00:30:40 +0000 (0:00:03.117) 0:05:32.343 **********
2026-04-10 00:30:40.999362 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-10 00:30:40.999371 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-10 00:30:40.999379 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-10 00:30:40.999387 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-10 00:30:40.999396 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-10 00:30:40.999404 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-10 00:30:40.999412 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:30:40.999420 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-10 00:30:40.999427 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-10 00:30:40.999435 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:30:40.999444 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-10 00:30:40.999452 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-10 00:30:40.999460 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-10 00:30:40.999467 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-10 00:30:40.999476 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:30:40.999484 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-10 00:30:40.999499 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-10 00:31:45.095623 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:31:45.095757 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-10 00:31:45.095780 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-10 00:31:45.095796 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-10 00:31:45.095809 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-10 00:31:45.095821 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:31:45.095830 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:31:45.095838 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-10 00:31:45.095847 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-10 00:31:45.095855 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-10 00:31:45.095863 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:31:45.095872 | orchestrator |
2026-04-10 00:31:45.095882 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-10 00:31:45.095896 | orchestrator | Friday 10 April 2026 00:30:41 +0000 (0:00:00.565) 0:05:32.909 **********
2026-04-10 00:31:45.095910 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.095923 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.095937 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.095950 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.095960 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.095971 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.095983 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.095997 | orchestrator |
2026-04-10 00:31:45.096011 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-10 00:31:45.096024 | orchestrator | Friday 10 April 2026 00:30:48 +0000 (0:00:07.137) 0:05:40.047 **********
2026-04-10 00:31:45.096038 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096051 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.096064 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096072 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096080 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096088 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096118 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096126 | orchestrator |
2026-04-10 00:31:45.096135 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-10 00:31:45.096143 | orchestrator | Friday 10 April 2026 00:30:49 +0000 (0:00:01.075) 0:05:41.123 **********
2026-04-10 00:31:45.096151 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.096159 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096167 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096175 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096182 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096190 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096198 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096206 | orchestrator |
2026-04-10 00:31:45.096214 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-10 00:31:45.096222 | orchestrator | Friday 10 April 2026 00:30:57 +0000 (0:00:08.578) 0:05:49.701 **********
2026-04-10 00:31:45.096231 | orchestrator | changed: [testbed-manager]
2026-04-10 00:31:45.096239 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096247 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096255 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096263 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096299 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096310 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096318 | orchestrator |
2026-04-10 00:31:45.096326 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-10 00:31:45.096335 | orchestrator | Friday 10 April 2026 00:31:01 +0000 (0:00:03.613) 0:05:53.315 **********
2026-04-10 00:31:45.096343 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.096351 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096359 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096367 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096374 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096383 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096391 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096399 | orchestrator |
2026-04-10 00:31:45.096407 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-10 00:31:45.096429 | orchestrator | Friday 10 April 2026 00:31:03 +0000 (0:00:01.433) 0:05:54.749 **********
2026-04-10 00:31:45.096438 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.096446 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096454 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096462 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096470 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096478 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096487 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096495 | orchestrator |
2026-04-10 00:31:45.096503 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-10 00:31:45.096511 | orchestrator | Friday 10 April 2026 00:31:04 +0000 (0:00:01.371) 0:05:56.121 **********
2026-04-10 00:31:45.096519 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:31:45.096527 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:31:45.096536 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:31:45.096544 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:31:45.096552 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:31:45.096560 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:31:45.096568 | orchestrator | changed: [testbed-manager]
2026-04-10 00:31:45.096576 | orchestrator |
2026-04-10 00:31:45.096585 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-10 00:31:45.096593 | orchestrator | Friday 10 April 2026 00:31:04 +0000 (0:00:00.582) 0:05:56.704 **********
2026-04-10 00:31:45.096601 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.096609 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096617 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096632 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096640 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096648 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096656 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096664 | orchestrator |
2026-04-10 00:31:45.096672 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-10 00:31:45.096698 | orchestrator | Friday 10 April 2026 00:31:15 +0000 (0:00:10.470) 0:06:07.174 **********
2026-04-10 00:31:45.096707 | orchestrator | changed: [testbed-manager]
2026-04-10 00:31:45.096715 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096723 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096731 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096739 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096750 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096764 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096777 | orchestrator |
2026-04-10 00:31:45.096790 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-10 00:31:45.096802 | orchestrator | Friday 10 April 2026 00:31:16 +0000 (0:00:01.133) 0:06:08.307 **********
2026-04-10 00:31:45.096815 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.096827 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096839 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.096851 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.096863 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096875 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096887 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096900 | orchestrator |
2026-04-10 00:31:45.096914 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-10 00:31:45.096926 | orchestrator | Friday 10 April 2026 00:31:26 +0000 (0:00:09.763) 0:06:18.071 **********
2026-04-10 00:31:45.096938 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.096950 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.096963 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.096975 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.096986 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.096999 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.097013 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.097026 | orchestrator |
2026-04-10 00:31:45.097038 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-10 00:31:45.097049 | orchestrator | Friday 10 April 2026 00:31:37 +0000 (0:00:11.498) 0:06:29.569 **********
2026-04-10 00:31:45.097061 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-10 00:31:45.097074 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-10 00:31:45.097089 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-10 00:31:45.097101 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-10 00:31:45.097112 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-10 00:31:45.097124 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-10 00:31:45.097137 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-10 00:31:45.097148 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-10 00:31:45.097160 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-10 00:31:45.097173 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-10 00:31:45.097186 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-10 00:31:45.097200 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-10 00:31:45.097213 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-10 00:31:45.097228 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-10 00:31:45.097242 | orchestrator |
2026-04-10 00:31:45.097255 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-10 00:31:45.097290 | orchestrator | Friday 10 April 2026 00:31:39 +0000 (0:00:01.195) 0:06:30.764 **********
2026-04-10 00:31:45.097305 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:31:45.097331 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:31:45.097345 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:31:45.097359 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:31:45.097372 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:31:45.097387 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:31:45.097400 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:31:45.097414 | orchestrator |
2026-04-10 00:31:45.097425 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-10 00:31:45.097433 | orchestrator | Friday 10 April 2026 00:31:39 +0000 (0:00:00.636) 0:06:31.401 **********
2026-04-10 00:31:45.097441 | orchestrator | ok: [testbed-manager]
2026-04-10 00:31:45.097449 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:31:45.097457 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:31:45.097466 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:31:45.097474 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:31:45.097482 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:31:45.097490 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:31:45.097498 | orchestrator |
2026-04-10 00:31:45.097506 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-10 00:31:45.097515 | orchestrator | Friday 10 April 2026 00:31:44 +0000 (0:00:04.672) 0:06:36.074 **********
2026-04-10 00:31:45.097523 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:31:45.097531 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:31:45.097541 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:31:45.097554 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:31:45.097573 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:31:45.097587 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:31:45.097599 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:31:45.097612 | orchestrator |
2026-04-10 00:31:45.097674 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-10 00:31:45.097691 | orchestrator | Friday 10 April 2026 00:31:44 +0000 (0:00:00.474) 0:06:36.548 **********
2026-04-10 00:31:45.097704 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-10 00:31:45.097720 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-10 00:31:45.097728 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:31:45.097736 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-10 00:31:45.097744 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-10 00:31:45.097752 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:31:45.097760 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-10 00:31:45.097768 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-10 00:31:45.097776 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:31:45.097797 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-10 00:32:04.539391 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-10 00:32:04.539508 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:32:04.539519 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-10 00:32:04.539528 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-10 00:32:04.539535 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:32:04.539543 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-10 00:32:04.539550 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-10 00:32:04.539558 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:32:04.539565 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-10 00:32:04.539573 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-10 00:32:04.539580 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:32:04.539588 | orchestrator |
2026-04-10 00:32:04.539598 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-10 00:32:04.539629 | orchestrator | Friday 10 April 2026 00:31:45 +0000 (0:00:00.508) 0:06:37.057 **********
2026-04-10 00:32:04.539637 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:32:04.539644 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:32:04.539651 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:32:04.539659 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:32:04.539666 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:32:04.539673 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:32:04.539680 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:32:04.539687 | orchestrator |
2026-04-10 00:32:04.539695 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-10 00:32:04.539702 | orchestrator | Friday 10 April 2026 00:31:45 +0000 (0:00:00.469) 0:06:37.526 **********
2026-04-10 00:32:04.539709 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:32:04.539717 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:32:04.539724 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:32:04.539731 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:32:04.539738 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:32:04.539745 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:32:04.539752 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:32:04.539760 | orchestrator |
2026-04-10 00:32:04.539767 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-10 00:32:04.539775 | orchestrator | Friday 10 April 2026 00:31:46 +0000 (0:00:00.627) 0:06:38.154 **********
2026-04-10 00:32:04.539782 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:32:04.539789 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:32:04.539796 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:32:04.539803 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:32:04.539810 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:32:04.539817 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:32:04.539825 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:32:04.539832 | orchestrator |
2026-04-10 00:32:04.539839 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-10 00:32:04.539846 | orchestrator | Friday 10 April 2026 00:31:46 +0000 (0:00:00.495) 0:06:38.649 **********
2026-04-10 00:32:04.539854 | orchestrator | ok: [testbed-manager]
2026-04-10 00:32:04.539861 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:32:04.539869 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:32:04.539877 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:32:04.539886 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:32:04.539894 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:32:04.539902 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:32:04.539910 | orchestrator |
2026-04-10 00:32:04.539919 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-10 00:32:04.539927 | orchestrator | Friday 10 April 2026 00:31:48 +0000 (0:00:01.864) 0:06:40.514 **********
2026-04-10 00:32:04.539937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:32:04.539948 | orchestrator |
2026-04-10 00:32:04.539972 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-10 00:32:04.539981 | orchestrator | Friday 10 April 2026 00:31:49 +0000 (0:00:00.832) 0:06:41.347 **********
2026-04-10 00:32:04.539990 | orchestrator | ok: [testbed-manager]
2026-04-10 00:32:04.539999 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:32:04.540008 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:32:04.540016 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:32:04.540035 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:32:04.540044 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:32:04.540060 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:32:04.540069 | orchestrator |
2026-04-10 00:32:04.540077 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-10 00:32:04.540093 | orchestrator | Friday 10 April 2026 00:31:50 +0000 (0:00:01.056) 0:06:42.403 **********
2026-04-10 00:32:04.540101 | orchestrator | ok: [testbed-manager]
2026-04-10 00:32:04.540108 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:32:04.540115 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:32:04.540122 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:32:04.540130 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:32:04.540137 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:32:04.540144 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:32:04.540151 | orchestrator |
2026-04-10 00:32:04.540159 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-10 00:32:04.540166 | orchestrator | Friday 10 April 2026 00:31:51 +0000 (0:00:00.874) 0:06:43.278 **********
2026-04-10 00:32:04.540173 | orchestrator | ok: [testbed-manager]
2026-04-10 00:32:04.540181 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:32:04.540188 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:32:04.540195 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:32:04.540202 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:32:04.540209 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:32:04.540217 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:32:04.540224 | orchestrator |
2026-04-10 00:32:04.540231 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-10 00:32:04.540253 | orchestrator | Friday 10 April 2026 00:31:52 +0000 (0:00:01.362) 0:06:44.640 **********
2026-04-10 00:32:04.540261 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:32:04.540308 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:32:04.540317 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:32:04.540324 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:32:04.540332 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:32:04.540339 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:32:04.540346 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:32:04.540353 | orchestrator |
2026-04-10 00:32:04.540361 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-10 00:32:04.540368 | orchestrator | Friday 10 April 2026 00:31:54 +0000 (0:00:01.389) 0:06:46.030 **********
2026-04-10 00:32:04.540375 | orchestrator | ok: [testbed-manager]
2026-04-10 00:32:04.540382 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:32:04.540390 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:32:04.540397 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:32:04.540404 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:32:04.540411 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:32:04.540419 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:32:04.540426 | orchestrator |
2026-04-10 00:32:04.540433 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-10 00:32:04.540440 | orchestrator | Friday 10 April 2026 00:31:55 +0000 (0:00:01.341) 0:06:47.372 **********
2026-04-10 00:32:04.540448 | orchestrator | changed: [testbed-manager]
2026-04-10 00:32:04.540455 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:32:04.540462 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:32:04.540469 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:32:04.540476 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:32:04.540483 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:32:04.540490 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:32:04.540498 | orchestrator |
2026-04-10 00:32:04.540505 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-10 00:32:04.540512 | orchestrator | Friday 10 April 2026 00:31:57 +0000 (0:00:01.770) 0:06:49.142 **********
2026-04-10 00:32:04.540520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:32:04.540528 | orchestrator |
2026-04-10 00:32:04.540535 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-10 00:32:04.540542 | orchestrator | Friday 10 April 2026 00:31:58 +0000 (0:00:00.861) 0:06:50.004 **********
2026-04-10 00:32:04.540564 | orchestrator | ok: [testbed-manager]
2026-04-10 00:32:04.540572 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:32:04.540579 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:32:04.540586 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:32:04.540594 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:32:04.540601 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:32:04.540608 | orchestrator | ok:
[testbed-node-5] 2026-04-10 00:32:04.540615 | orchestrator | 2026-04-10 00:32:04.540623 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-10 00:32:04.540630 | orchestrator | Friday 10 April 2026 00:31:59 +0000 (0:00:01.440) 0:06:51.444 ********** 2026-04-10 00:32:04.540637 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:04.540644 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:04.540652 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:32:04.540659 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:04.540666 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:32:04.540673 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:32:04.540680 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:32:04.540688 | orchestrator | 2026-04-10 00:32:04.540695 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-10 00:32:04.540702 | orchestrator | Friday 10 April 2026 00:32:01 +0000 (0:00:01.332) 0:06:52.777 ********** 2026-04-10 00:32:04.540710 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:04.540717 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:04.540724 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:32:04.540731 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:04.540738 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:32:04.540746 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:32:04.540753 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:32:04.540760 | orchestrator | 2026-04-10 00:32:04.540768 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-10 00:32:04.540775 | orchestrator | Friday 10 April 2026 00:32:02 +0000 (0:00:01.141) 0:06:53.919 ********** 2026-04-10 00:32:04.540782 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:04.540790 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:04.540797 | orchestrator | ok: [testbed-node-1] 2026-04-10 
00:32:04.540804 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:32:04.540811 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:04.540819 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:32:04.540826 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:32:04.540833 | orchestrator | 2026-04-10 00:32:04.540840 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-10 00:32:04.540848 | orchestrator | Friday 10 April 2026 00:32:03 +0000 (0:00:01.187) 0:06:55.106 ********** 2026-04-10 00:32:04.540855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:32:04.540863 | orchestrator | 2026-04-10 00:32:04.540870 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-10 00:32:04.540877 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.853) 0:06:55.959 ********** 2026-04-10 00:32:04.540885 | orchestrator | 2026-04-10 00:32:04.540892 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-10 00:32:04.540899 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.056) 0:06:56.016 ********** 2026-04-10 00:32:04.540907 | orchestrator | 2026-04-10 00:32:04.540914 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-10 00:32:04.540921 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.183) 0:06:56.200 ********** 2026-04-10 00:32:04.540928 | orchestrator | 2026-04-10 00:32:04.540936 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-10 00:32:04.540948 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.040) 0:06:56.240 ********** 2026-04-10 00:32:31.475550 | orchestrator | 
2026-04-10 00:32:31.475675 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-10 00:32:31.475716 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.039) 0:06:56.279 ********** 2026-04-10 00:32:31.475729 | orchestrator | 2026-04-10 00:32:31.475741 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-10 00:32:31.475752 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.054) 0:06:56.334 ********** 2026-04-10 00:32:31.475763 | orchestrator | 2026-04-10 00:32:31.475774 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-10 00:32:31.475785 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.037) 0:06:56.372 ********** 2026-04-10 00:32:31.475796 | orchestrator | 2026-04-10 00:32:31.475807 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-10 00:32:31.475818 | orchestrator | Friday 10 April 2026 00:32:04 +0000 (0:00:00.047) 0:06:56.420 ********** 2026-04-10 00:32:31.475829 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:31.475841 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:32:31.475852 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:31.475863 | orchestrator | 2026-04-10 00:32:31.475874 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-10 00:32:31.475885 | orchestrator | Friday 10 April 2026 00:32:06 +0000 (0:00:01.405) 0:06:57.825 ********** 2026-04-10 00:32:31.475896 | orchestrator | changed: [testbed-manager] 2026-04-10 00:32:31.475909 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:32:31.475919 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:32:31.475930 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:32:31.475941 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:32:31.475952 | orchestrator | changed: 
[testbed-node-5] 2026-04-10 00:32:31.475964 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:32:31.475975 | orchestrator | 2026-04-10 00:32:31.475986 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-10 00:32:31.475997 | orchestrator | Friday 10 April 2026 00:32:07 +0000 (0:00:01.575) 0:06:59.400 ********** 2026-04-10 00:32:31.476008 | orchestrator | changed: [testbed-manager] 2026-04-10 00:32:31.476019 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:32:31.476030 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:32:31.476041 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:32:31.476052 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:32:31.476063 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:32:31.476074 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:32:31.476085 | orchestrator | 2026-04-10 00:32:31.476095 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-10 00:32:31.476106 | orchestrator | Friday 10 April 2026 00:32:08 +0000 (0:00:01.272) 0:07:00.673 ********** 2026-04-10 00:32:31.476117 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:32:31.476128 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:32:31.476139 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:32:31.476150 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:32:31.476161 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:32:31.476171 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:32:31.476182 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:32:31.476193 | orchestrator | 2026-04-10 00:32:31.476204 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-10 00:32:31.476215 | orchestrator | Friday 10 April 2026 00:32:11 +0000 (0:00:02.479) 0:07:03.153 ********** 2026-04-10 00:32:31.476226 | orchestrator | skipping: [testbed-node-0] 
2026-04-10 00:32:31.476237 | orchestrator | 2026-04-10 00:32:31.476248 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-10 00:32:31.476259 | orchestrator | Friday 10 April 2026 00:32:11 +0000 (0:00:00.101) 0:07:03.255 ********** 2026-04-10 00:32:31.476294 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:31.476306 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:32:31.476317 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:32:31.476328 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:32:31.476339 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:32:31.476358 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:32:31.476369 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:32:31.476380 | orchestrator | 2026-04-10 00:32:31.476391 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-10 00:32:31.476419 | orchestrator | Friday 10 April 2026 00:32:12 +0000 (0:00:01.263) 0:07:04.519 ********** 2026-04-10 00:32:31.476431 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:32:31.476442 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:32:31.476456 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:32:31.476474 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:32:31.476492 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:32:31.476504 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:32:31.476515 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:32:31.476526 | orchestrator | 2026-04-10 00:32:31.476538 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-10 00:32:31.476549 | orchestrator | Friday 10 April 2026 00:32:13 +0000 (0:00:00.523) 0:07:05.042 ********** 2026-04-10 00:32:31.476561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:32:31.476575 | orchestrator | 2026-04-10 00:32:31.476587 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-10 00:32:31.476598 | orchestrator | Friday 10 April 2026 00:32:14 +0000 (0:00:00.923) 0:07:05.965 ********** 2026-04-10 00:32:31.476609 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:31.476620 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:31.476631 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:32:31.476642 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:31.476653 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:32:31.476665 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:32:31.476676 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:32:31.476687 | orchestrator | 2026-04-10 00:32:31.476698 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-10 00:32:31.476709 | orchestrator | Friday 10 April 2026 00:32:15 +0000 (0:00:01.031) 0:07:06.997 ********** 2026-04-10 00:32:31.476720 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-10 00:32:31.476750 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-10 00:32:31.476763 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-10 00:32:31.476774 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-10 00:32:31.476785 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-10 00:32:31.476796 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-10 00:32:31.476807 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-10 00:32:31.476818 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-10 00:32:31.476829 | orchestrator | changed: [testbed-node-1] => 
(item=docker_images) 2026-04-10 00:32:31.476840 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-10 00:32:31.476851 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-10 00:32:31.476862 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-10 00:32:31.476873 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-10 00:32:31.476884 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-10 00:32:31.476895 | orchestrator | 2026-04-10 00:32:31.476906 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-04-10 00:32:31.476917 | orchestrator | Friday 10 April 2026 00:32:17 +0000 (0:00:02.460) 0:07:09.457 ********** 2026-04-10 00:32:31.476929 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:32:31.476940 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:32:31.476951 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:32:31.476962 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:32:31.476981 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:32:31.476992 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:32:31.477003 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:32:31.477014 | orchestrator | 2026-04-10 00:32:31.477025 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-10 00:32:31.477037 | orchestrator | Friday 10 April 2026 00:32:18 +0000 (0:00:00.498) 0:07:09.956 ********** 2026-04-10 00:32:31.477049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:32:31.477062 | orchestrator | 2026-04-10 00:32:31.477074 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-04-10 00:32:31.477085 | orchestrator | Friday 10 April 2026 00:32:19 +0000 (0:00:00.990) 0:07:10.946 ********** 2026-04-10 00:32:31.477096 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:31.477107 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:31.477118 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:32:31.477129 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:31.477140 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:32:31.477151 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:32:31.477162 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:32:31.477173 | orchestrator | 2026-04-10 00:32:31.477184 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-10 00:32:31.477196 | orchestrator | Friday 10 April 2026 00:32:20 +0000 (0:00:00.828) 0:07:11.775 ********** 2026-04-10 00:32:31.477207 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:31.477218 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:31.477229 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:32:31.477240 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:31.477251 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:32:31.477262 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:32:31.477291 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:32:31.477303 | orchestrator | 2026-04-10 00:32:31.477314 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-10 00:32:31.477325 | orchestrator | Friday 10 April 2026 00:32:20 +0000 (0:00:00.798) 0:07:12.573 ********** 2026-04-10 00:32:31.477336 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:32:31.477347 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:32:31.477358 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:32:31.477374 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:32:31.477386 | orchestrator | skipping: [testbed-node-3] 
2026-04-10 00:32:31.477397 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:32:31.477407 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:32:31.477418 | orchestrator | 2026-04-10 00:32:31.477430 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-10 00:32:31.477441 | orchestrator | Friday 10 April 2026 00:32:21 +0000 (0:00:00.482) 0:07:13.056 ********** 2026-04-10 00:32:31.477452 | orchestrator | ok: [testbed-manager] 2026-04-10 00:32:31.477463 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:32:31.477474 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:32:31.477484 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:32:31.477495 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:32:31.477506 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:32:31.477517 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:32:31.477528 | orchestrator | 2026-04-10 00:32:31.477539 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-10 00:32:31.477550 | orchestrator | Friday 10 April 2026 00:32:23 +0000 (0:00:01.690) 0:07:14.747 ********** 2026-04-10 00:32:31.477561 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:32:31.477572 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:32:31.477583 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:32:31.477594 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:32:31.477605 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:32:31.477622 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:32:31.477633 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:32:31.477644 | orchestrator | 2026-04-10 00:32:31.477655 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-10 00:32:31.477666 | orchestrator | Friday 10 April 2026 00:32:23 +0000 (0:00:00.687) 0:07:15.434 ********** 2026-04-10 00:32:31.477677 | orchestrator | 
ok: [testbed-manager] 2026-04-10 00:32:31.477688 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:32:31.477699 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:32:31.477710 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:32:31.477721 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:32:31.477732 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:32:31.477749 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:04.142210 | orchestrator | 2026-04-10 00:33:04.142435 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-04-10 00:33:04.142467 | orchestrator | Friday 10 April 2026 00:32:31 +0000 (0:00:07.806) 0:07:23.241 ********** 2026-04-10 00:33:04.142487 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.142506 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:04.142594 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:04.142607 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:04.142617 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:04.142627 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:04.142637 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:04.142647 | orchestrator | 2026-04-10 00:33:04.142660 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-10 00:33:04.142672 | orchestrator | Friday 10 April 2026 00:32:32 +0000 (0:00:01.370) 0:07:24.611 ********** 2026-04-10 00:33:04.142683 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.142694 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:04.142706 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:04.142717 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:04.142735 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:04.142761 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:04.142781 | orchestrator | changed: [testbed-node-4] 2026-04-10 
00:33:04.142797 | orchestrator | 2026-04-10 00:33:04.142813 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-10 00:33:04.142829 | orchestrator | Friday 10 April 2026 00:32:34 +0000 (0:00:01.758) 0:07:26.370 ********** 2026-04-10 00:33:04.142844 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.142862 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:04.142878 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:04.142894 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:04.142910 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:04.142926 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:04.142942 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:04.142959 | orchestrator | 2026-04-10 00:33:04.142976 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-10 00:33:04.142997 | orchestrator | Friday 10 April 2026 00:32:36 +0000 (0:00:01.807) 0:07:28.178 ********** 2026-04-10 00:33:04.143014 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.143031 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.143047 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.143062 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.143078 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:04.143093 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.143107 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.143123 | orchestrator | 2026-04-10 00:33:04.143141 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-10 00:33:04.143158 | orchestrator | Friday 10 April 2026 00:32:37 +0000 (0:00:00.846) 0:07:29.024 ********** 2026-04-10 00:33:04.143177 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:33:04.143194 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:33:04.143209 | orchestrator | skipping: 
[testbed-node-1] 2026-04-10 00:33:04.143258 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:33:04.143276 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:33:04.143349 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:33:04.143367 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:33:04.143384 | orchestrator | 2026-04-10 00:33:04.143402 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-10 00:33:04.143419 | orchestrator | Friday 10 April 2026 00:32:38 +0000 (0:00:00.750) 0:07:29.774 ********** 2026-04-10 00:33:04.143436 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:33:04.143449 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:33:04.143458 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:33:04.143468 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:33:04.143478 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:33:04.143488 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:33:04.143497 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:33:04.143507 | orchestrator | 2026-04-10 00:33:04.143517 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-04-10 00:33:04.143527 | orchestrator | Friday 10 April 2026 00:32:38 +0000 (0:00:00.684) 0:07:30.458 ********** 2026-04-10 00:33:04.143537 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.143546 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.143557 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.143566 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.143576 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:04.143586 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.143596 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.143606 | orchestrator | 2026-04-10 00:33:04.143615 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-04-10 00:33:04.143625 | orchestrator | Friday 10 April 2026 00:32:39 +0000 (0:00:00.488) 0:07:30.947 ********** 2026-04-10 00:33:04.143635 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.143645 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.143655 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.143664 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.143674 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:04.143683 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.143693 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.143703 | orchestrator | 2026-04-10 00:33:04.143713 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-04-10 00:33:04.143723 | orchestrator | Friday 10 April 2026 00:32:39 +0000 (0:00:00.501) 0:07:31.448 ********** 2026-04-10 00:33:04.143732 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.143742 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.143751 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.143761 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.143771 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:04.143780 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.143790 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.143799 | orchestrator | 2026-04-10 00:33:04.143809 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-04-10 00:33:04.143819 | orchestrator | Friday 10 April 2026 00:32:40 +0000 (0:00:00.509) 0:07:31.958 ********** 2026-04-10 00:33:04.143829 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.143838 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.143848 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.143862 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.143878 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.143895 | orchestrator | ok: [testbed-node-3] 
2026-04-10 00:33:04.143931 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.143949 | orchestrator | 2026-04-10 00:33:04.143993 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-04-10 00:33:04.144012 | orchestrator | Friday 10 April 2026 00:32:45 +0000 (0:00:05.526) 0:07:37.485 ********** 2026-04-10 00:33:04.144028 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:33:04.144044 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:33:04.144066 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:33:04.144075 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:33:04.144085 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:33:04.144094 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:33:04.144104 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:33:04.144114 | orchestrator | 2026-04-10 00:33:04.144124 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-10 00:33:04.144133 | orchestrator | Friday 10 April 2026 00:32:46 +0000 (0:00:00.721) 0:07:38.206 ********** 2026-04-10 00:33:04.144145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:33:04.144158 | orchestrator | 2026-04-10 00:33:04.144168 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-10 00:33:04.144178 | orchestrator | Friday 10 April 2026 00:32:47 +0000 (0:00:00.793) 0:07:39.000 ********** 2026-04-10 00:33:04.144188 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.144197 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.144207 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.144217 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:04.144226 | 
orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.144236 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.144246 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.144255 | orchestrator | 2026-04-10 00:33:04.144265 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-10 00:33:04.144275 | orchestrator | Friday 10 April 2026 00:32:49 +0000 (0:00:02.177) 0:07:41.177 ********** 2026-04-10 00:33:04.144317 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.144330 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.144340 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.144349 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.144359 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:04.144369 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.144378 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.144388 | orchestrator | 2026-04-10 00:33:04.144398 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-10 00:33:04.144408 | orchestrator | Friday 10 April 2026 00:32:50 +0000 (0:00:01.334) 0:07:42.512 ********** 2026-04-10 00:33:04.144418 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:04.144428 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:04.144437 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:04.144447 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:04.144456 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:04.144466 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:04.144476 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:04.144486 | orchestrator | 2026-04-10 00:33:04.144495 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-10 00:33:04.144507 | orchestrator | Friday 10 April 2026 00:32:51 +0000 (0:00:00.835) 0:07:43.347 ********** 2026-04-10 00:33:04.144522 | orchestrator | changed: 
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-10 00:33:04.144540 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-10 00:33:04.144557 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-10 00:33:04.144572 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-10 00:33:04.144600 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-10 00:33:04.144617 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-10 00:33:04.144646 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-10 00:33:04.144660 | orchestrator | 2026-04-10 00:33:04.144676 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-10 00:33:04.144694 | orchestrator | Friday 10 April 2026 00:32:53 +0000 (0:00:01.677) 0:07:45.025 ********** 2026-04-10 00:33:04.144711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:33:04.144729 | orchestrator | 2026-04-10 00:33:04.144744 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-04-10 00:33:04.144760 | 
orchestrator | Friday 10 April 2026 00:32:54 +0000 (0:00:00.899) 0:07:45.924 ********** 2026-04-10 00:33:04.144776 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:04.144792 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:04.144806 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:04.144822 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:04.144839 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:04.144855 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:04.144871 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:04.144886 | orchestrator | 2026-04-10 00:33:04.144915 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-10 00:33:35.077065 | orchestrator | Friday 10 April 2026 00:33:04 +0000 (0:00:09.918) 0:07:55.842 ********** 2026-04-10 00:33:35.077158 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:35.077173 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:35.077182 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:35.077191 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:35.077199 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:35.077207 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:35.077216 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:35.077224 | orchestrator | 2026-04-10 00:33:35.077234 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-10 00:33:35.077243 | orchestrator | Friday 10 April 2026 00:33:05 +0000 (0:00:01.790) 0:07:57.632 ********** 2026-04-10 00:33:35.077251 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:35.077259 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:35.077267 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:35.077276 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:35.077283 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:35.077292 | orchestrator | ok: [testbed-node-5] 
2026-04-10 00:33:35.077341 | orchestrator | 2026-04-10 00:33:35.077349 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-10 00:33:35.077358 | orchestrator | Friday 10 April 2026 00:33:07 +0000 (0:00:01.497) 0:07:59.130 ********** 2026-04-10 00:33:35.077366 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.077375 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.077383 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.077391 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.077400 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.077408 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.077416 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.077424 | orchestrator | 2026-04-10 00:33:35.077432 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-10 00:33:35.077440 | orchestrator | 2026-04-10 00:33:35.077448 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-10 00:33:35.077456 | orchestrator | Friday 10 April 2026 00:33:08 +0000 (0:00:01.230) 0:08:00.361 ********** 2026-04-10 00:33:35.077464 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:33:35.077472 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:33:35.077500 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:33:35.077509 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:33:35.077517 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:33:35.077525 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:33:35.077533 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:33:35.077541 | orchestrator | 2026-04-10 00:33:35.077549 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-10 00:33:35.077556 | orchestrator | 2026-04-10 00:33:35.077563 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-04-10 00:33:35.077570 | orchestrator | Friday 10 April 2026 00:33:09 +0000 (0:00:00.480) 0:08:00.841 ********** 2026-04-10 00:33:35.077578 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.077586 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.077594 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.077602 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.077610 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.077618 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.077626 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.077634 | orchestrator | 2026-04-10 00:33:35.077642 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-10 00:33:35.077650 | orchestrator | Friday 10 April 2026 00:33:10 +0000 (0:00:01.438) 0:08:02.279 ********** 2026-04-10 00:33:35.077658 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:35.077666 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:35.077674 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:35.077682 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:35.077690 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:35.077699 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:35.077706 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:35.077714 | orchestrator | 2026-04-10 00:33:35.077722 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-10 00:33:35.077730 | orchestrator | Friday 10 April 2026 00:33:12 +0000 (0:00:01.670) 0:08:03.950 ********** 2026-04-10 00:33:35.077738 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:33:35.077746 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:33:35.077766 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:33:35.077773 | orchestrator | skipping: [testbed-node-2] 
2026-04-10 00:33:35.077780 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:33:35.077788 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:33:35.077796 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:33:35.077804 | orchestrator | 2026-04-10 00:33:35.077812 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-10 00:33:35.077820 | orchestrator | Friday 10 April 2026 00:33:12 +0000 (0:00:00.497) 0:08:04.448 ********** 2026-04-10 00:33:35.077827 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:33:35.077837 | orchestrator | 2026-04-10 00:33:35.077844 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-10 00:33:35.077851 | orchestrator | Friday 10 April 2026 00:33:13 +0000 (0:00:00.825) 0:08:05.273 ********** 2026-04-10 00:33:35.077860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:33:35.077869 | orchestrator | 2026-04-10 00:33:35.077876 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-10 00:33:35.077884 | orchestrator | Friday 10 April 2026 00:33:14 +0000 (0:00:00.942) 0:08:06.216 ********** 2026-04-10 00:33:35.077891 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.077898 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.077905 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.077912 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.077920 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.077933 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.077940 | 
orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.077948 | orchestrator | 2026-04-10 00:33:35.077972 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-10 00:33:35.077980 | orchestrator | Friday 10 April 2026 00:33:23 +0000 (0:00:09.136) 0:08:15.353 ********** 2026-04-10 00:33:35.077987 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.077994 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.078002 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.078009 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.078061 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.078069 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.078077 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.078085 | orchestrator | 2026-04-10 00:33:35.078092 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-10 00:33:35.078100 | orchestrator | Friday 10 April 2026 00:33:24 +0000 (0:00:00.916) 0:08:16.269 ********** 2026-04-10 00:33:35.078108 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.078116 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.078123 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.078131 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.078138 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.078146 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.078154 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.078161 | orchestrator | 2026-04-10 00:33:35.078169 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-10 00:33:35.078176 | orchestrator | Friday 10 April 2026 00:33:25 +0000 (0:00:01.331) 0:08:17.601 ********** 2026-04-10 00:33:35.078184 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.078191 | orchestrator | 
changed: [testbed-node-0] 2026-04-10 00:33:35.078199 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.078207 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.078214 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.078222 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.078229 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.078237 | orchestrator | 2026-04-10 00:33:35.078245 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-10 00:33:35.078252 | orchestrator | Friday 10 April 2026 00:33:27 +0000 (0:00:02.031) 0:08:19.632 ********** 2026-04-10 00:33:35.078260 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.078267 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.078275 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.078282 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.078290 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.078354 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.078362 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.078369 | orchestrator | 2026-04-10 00:33:35.078376 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-10 00:33:35.078383 | orchestrator | Friday 10 April 2026 00:33:29 +0000 (0:00:01.248) 0:08:20.880 ********** 2026-04-10 00:33:35.078391 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.078398 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.078405 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.078412 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.078419 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.078427 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.078434 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.078441 | orchestrator | 2026-04-10 
00:33:35.078448 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-10 00:33:35.078455 | orchestrator | 2026-04-10 00:33:35.078462 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-10 00:33:35.078469 | orchestrator | Friday 10 April 2026 00:33:30 +0000 (0:00:01.075) 0:08:21.956 ********** 2026-04-10 00:33:35.078483 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:33:35.078491 | orchestrator | 2026-04-10 00:33:35.078498 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-10 00:33:35.078506 | orchestrator | Friday 10 April 2026 00:33:31 +0000 (0:00:00.912) 0:08:22.869 ********** 2026-04-10 00:33:35.078513 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:35.078520 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:35.078532 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:35.078540 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:35.078614 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:35.078623 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:35.078630 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:35.078638 | orchestrator | 2026-04-10 00:33:35.078645 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-10 00:33:35.078653 | orchestrator | Friday 10 April 2026 00:33:32 +0000 (0:00:00.915) 0:08:23.784 ********** 2026-04-10 00:33:35.078660 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:35.078667 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:35.078674 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:35.078681 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:35.078689 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:35.078696 | 
orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:35.078703 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:35.078710 | orchestrator | 2026-04-10 00:33:35.078717 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-10 00:33:35.078725 | orchestrator | Friday 10 April 2026 00:33:33 +0000 (0:00:01.314) 0:08:25.099 ********** 2026-04-10 00:33:35.078732 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:33:35.078739 | orchestrator | 2026-04-10 00:33:35.078747 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-10 00:33:35.078754 | orchestrator | Friday 10 April 2026 00:33:34 +0000 (0:00:00.804) 0:08:25.904 ********** 2026-04-10 00:33:35.078761 | orchestrator | ok: [testbed-manager] 2026-04-10 00:33:35.078768 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:33:35.078776 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:33:35.078783 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:33:35.078790 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:33:35.078797 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:33:35.078804 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:33:35.078811 | orchestrator | 2026-04-10 00:33:35.078827 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-10 00:33:36.691811 | orchestrator | Friday 10 April 2026 00:33:35 +0000 (0:00:00.872) 0:08:26.776 ********** 2026-04-10 00:33:36.691953 | orchestrator | changed: [testbed-manager] 2026-04-10 00:33:36.691971 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:33:36.691982 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:33:36.691992 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:33:36.692002 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:33:36.692773 | 
orchestrator | changed: [testbed-node-4] 2026-04-10 00:33:36.692793 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:33:36.692812 | orchestrator | 2026-04-10 00:33:36.692830 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:33:36.692847 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-10 00:33:36.692866 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-10 00:33:36.692883 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-10 00:33:36.692934 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-10 00:33:36.692954 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-10 00:33:36.692971 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-10 00:33:36.692989 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-10 00:33:36.693002 | orchestrator | 2026-04-10 00:33:36.693012 | orchestrator | 2026-04-10 00:33:36.693023 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:33:36.693033 | orchestrator | Friday 10 April 2026 00:33:36 +0000 (0:00:01.312) 0:08:28.089 ********** 2026-04-10 00:33:36.693043 | orchestrator | =============================================================================== 2026-04-10 00:33:36.693053 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.63s 2026-04-10 00:33:36.693063 | orchestrator | osism.commons.packages : Download required packages -------------------- 49.87s 2026-04-10 00:33:36.693073 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 33.25s 2026-04-10 00:33:36.693083 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.06s 2026-04-10 00:33:36.693093 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.81s 2026-04-10 00:33:36.693103 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.50s 2026-04-10 00:33:36.693112 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.08s 2026-04-10 00:33:36.693123 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.47s 2026-04-10 00:33:36.693133 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.92s 2026-04-10 00:33:36.693143 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.76s 2026-04-10 00:33:36.693152 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.65s 2026-04-10 00:33:36.693177 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.43s 2026-04-10 00:33:36.693188 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.14s 2026-04-10 00:33:36.693198 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.79s 2026-04-10 00:33:36.693208 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.58s 2026-04-10 00:33:36.693218 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.81s 2026-04-10 00:33:36.693228 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.14s 2026-04-10 00:33:36.693238 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.12s 2026-04-10 00:33:36.693247 | orchestrator | 
osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.79s 2026-04-10 00:33:36.693257 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.53s 2026-04-10 00:33:36.880220 | orchestrator | + osism apply fail2ban 2026-04-10 00:33:48.549573 | orchestrator | 2026-04-10 00:33:48 | INFO  | Prepare task for execution of fail2ban. 2026-04-10 00:33:48.623094 | orchestrator | 2026-04-10 00:33:48 | INFO  | Task 9d57f2de-04f7-4427-a6db-0b4eece37e3b (fail2ban) was prepared for execution. 2026-04-10 00:33:48.623192 | orchestrator | 2026-04-10 00:33:48 | INFO  | It takes a moment until task 9d57f2de-04f7-4427-a6db-0b4eece37e3b (fail2ban) has been started and output is visible here. 2026-04-10 00:34:10.011970 | orchestrator | 2026-04-10 00:34:10.012089 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-10 00:34:10.012138 | orchestrator | 2026-04-10 00:34:10.012154 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-10 00:34:10.012167 | orchestrator | Friday 10 April 2026 00:33:51 +0000 (0:00:00.257) 0:00:00.257 ********** 2026-04-10 00:34:10.012182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:34:10.012199 | orchestrator | 2026-04-10 00:34:10.012212 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-10 00:34:10.012225 | orchestrator | Friday 10 April 2026 00:33:52 +0000 (0:00:01.022) 0:00:01.280 ********** 2026-04-10 00:34:10.012238 | orchestrator | changed: [testbed-manager] 2026-04-10 00:34:10.012252 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:34:10.012266 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:34:10.012279 | 
orchestrator | changed: [testbed-node-0] 2026-04-10 00:34:10.012293 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:34:10.012379 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:34:10.012393 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:34:10.012407 | orchestrator | 2026-04-10 00:34:10.012421 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-10 00:34:10.012436 | orchestrator | Friday 10 April 2026 00:34:04 +0000 (0:00:11.435) 0:00:12.715 ********** 2026-04-10 00:34:10.012450 | orchestrator | changed: [testbed-manager] 2026-04-10 00:34:10.012464 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:34:10.012478 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:34:10.012492 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:34:10.012506 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:34:10.012520 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:34:10.012536 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:34:10.012550 | orchestrator | 2026-04-10 00:34:10.012566 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-04-10 00:34:10.012581 | orchestrator | Friday 10 April 2026 00:34:06 +0000 (0:00:01.847) 0:00:14.563 ********** 2026-04-10 00:34:10.012596 | orchestrator | ok: [testbed-manager] 2026-04-10 00:34:10.012612 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:34:10.012628 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:34:10.012644 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:34:10.012659 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:34:10.012674 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:34:10.012688 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:34:10.012702 | orchestrator | 2026-04-10 00:34:10.012716 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-04-10 00:34:10.012730 | orchestrator | Friday 10 April 
2026 00:34:07 +0000 (0:00:01.250) 0:00:15.814 ********** 2026-04-10 00:34:10.012744 | orchestrator | changed: [testbed-manager] 2026-04-10 00:34:10.012759 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:34:10.012775 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:34:10.012791 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:34:10.012812 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:34:10.012828 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:34:10.012846 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:34:10.012860 | orchestrator | 2026-04-10 00:34:10.012874 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:34:10.012889 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:34:10.012904 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:34:10.012918 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:34:10.012931 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:34:10.012975 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:34:10.012990 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:34:10.013005 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:34:10.013019 | orchestrator | 2026-04-10 00:34:10.013033 | orchestrator | 2026-04-10 00:34:10.013048 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:34:10.013062 | orchestrator | Friday 10 April 2026 00:34:09 +0000 (0:00:01.990) 0:00:17.804 ********** 2026-04-10 00:34:10.013076 | 
orchestrator | =============================================================================== 2026-04-10 00:34:10.013088 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.44s 2026-04-10 00:34:10.013101 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.99s 2026-04-10 00:34:10.013113 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.85s 2026-04-10 00:34:10.013127 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.25s 2026-04-10 00:34:10.013141 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.02s 2026-04-10 00:34:10.195131 | orchestrator | + osism apply network 2026-04-10 00:34:21.501893 | orchestrator | 2026-04-10 00:34:21 | INFO  | Prepare task for execution of network. 2026-04-10 00:34:21.586009 | orchestrator | 2026-04-10 00:34:21 | INFO  | Task acc7f2e6-ed02-4eaf-8461-45a499ca3d21 (network) was prepared for execution. 2026-04-10 00:34:21.586157 | orchestrator | 2026-04-10 00:34:21 | INFO  | It takes a moment until task acc7f2e6-ed02-4eaf-8461-45a499ca3d21 (network) has been started and output is visible here. 
2026-04-10 00:34:49.688455 | orchestrator | 2026-04-10 00:34:49.688563 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-10 00:34:49.688578 | orchestrator | 2026-04-10 00:34:49.688589 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-10 00:34:49.688601 | orchestrator | Friday 10 April 2026 00:34:24 +0000 (0:00:00.328) 0:00:00.328 ********** 2026-04-10 00:34:49.688611 | orchestrator | ok: [testbed-manager] 2026-04-10 00:34:49.688622 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:34:49.688632 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:34:49.688641 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:34:49.688651 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:34:49.688661 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:34:49.688671 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:34:49.688681 | orchestrator | 2026-04-10 00:34:49.688690 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-10 00:34:49.688700 | orchestrator | Friday 10 April 2026 00:34:25 +0000 (0:00:00.603) 0:00:00.932 ********** 2026-04-10 00:34:49.688711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:34:49.688724 | orchestrator | 2026-04-10 00:34:49.688734 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-10 00:34:49.688744 | orchestrator | Friday 10 April 2026 00:34:26 +0000 (0:00:01.250) 0:00:02.182 ********** 2026-04-10 00:34:49.688753 | orchestrator | ok: [testbed-manager] 2026-04-10 00:34:49.688763 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:34:49.688773 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:34:49.688782 | 
orchestrator | ok: [testbed-node-0]
2026-04-10 00:34:49.688792 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:34:49.688801 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:34:49.688832 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:34:49.688843 | orchestrator |
2026-04-10 00:34:49.688852 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-10 00:34:49.688862 | orchestrator | Friday 10 April 2026 00:34:29 +0000 (0:00:02.800) 0:00:04.983 **********
2026-04-10 00:34:49.688872 | orchestrator | ok: [testbed-manager]
2026-04-10 00:34:49.688882 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:34:49.688891 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:34:49.688909 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:34:49.688925 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:34:49.688942 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:34:49.688960 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:34:49.688979 | orchestrator |
2026-04-10 00:34:49.688997 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-10 00:34:49.689009 | orchestrator | Friday 10 April 2026 00:34:31 +0000 (0:00:01.623) 0:00:06.606 **********
2026-04-10 00:34:49.689021 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-10 00:34:49.689033 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-10 00:34:49.689043 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-10 00:34:49.689054 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-10 00:34:49.689065 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-10 00:34:49.689076 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-10 00:34:49.689087 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-10 00:34:49.689098 | orchestrator |
2026-04-10 00:34:49.689110 | orchestrator | TASK [osism.commons.network : Write network_netplan_config_template to temporary file] ***
2026-04-10 00:34:49.689122 | orchestrator | Friday 10 April 2026 00:34:32 +0000 (0:00:01.184) 0:00:07.791 **********
2026-04-10 00:34:49.689132 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:34:49.689144 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:34:49.689155 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:34:49.689166 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:34:49.689176 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:34:49.689187 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:34:49.689198 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:34:49.689209 | orchestrator |
2026-04-10 00:34:49.689222 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] ***
2026-04-10 00:34:49.689234 | orchestrator | Friday 10 April 2026 00:34:33 +0000 (0:00:00.618) 0:00:08.409 **********
2026-04-10 00:34:49.689245 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:34:49.689257 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:34:49.689268 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:34:49.689279 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:34:49.689292 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:34:49.689331 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:34:49.689344 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:34:49.689355 | orchestrator |
2026-04-10 00:34:49.689384 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] ***
2026-04-10 00:34:49.689396 | orchestrator | Friday 10 April 2026 00:34:33 +0000 (0:00:00.743) 0:00:09.152 **********
2026-04-10 00:34:49.689407 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:34:49.689418 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:34:49.689429 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:34:49.689439 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:34:49.689450 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:34:49.689461 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:34:49.689472 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:34:49.689483 | orchestrator |
2026-04-10 00:34:49.689494 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-10 00:34:49.689505 | orchestrator | Friday 10 April 2026 00:34:34 +0000 (0:00:00.683) 0:00:09.836 **********
2026-04-10 00:34:49.689516 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 00:34:49.689536 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-10 00:34:49.689547 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-10 00:34:49.689557 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-10 00:34:49.689568 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-10 00:34:49.689579 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-10 00:34:49.689590 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-10 00:34:49.689601 | orchestrator |
2026-04-10 00:34:49.689631 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-10 00:34:49.689643 | orchestrator | Friday 10 April 2026 00:34:37 +0000 (0:00:02.861) 0:00:12.698 **********
2026-04-10 00:34:49.689654 | orchestrator | changed: [testbed-manager]
2026-04-10 00:34:49.689665 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:34:49.689676 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:34:49.689686 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:34:49.689697 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:34:49.689708 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:34:49.689719 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:34:49.689730 | orchestrator |
2026-04-10 00:34:49.689741 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-10 00:34:49.689752 | orchestrator | Friday 10 April 2026 00:34:38 +0000 (0:00:01.478) 0:00:14.176 **********
2026-04-10 00:34:49.689763 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 00:34:49.689774 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-10 00:34:49.689784 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-10 00:34:49.689795 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-10 00:34:49.689806 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-10 00:34:49.689817 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-10 00:34:49.689828 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-10 00:34:49.689839 | orchestrator |
2026-04-10 00:34:49.689850 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-10 00:34:49.689861 | orchestrator | Friday 10 April 2026 00:34:40 +0000 (0:00:01.507) 0:00:15.684 **********
2026-04-10 00:34:49.689872 | orchestrator | ok: [testbed-manager]
2026-04-10 00:34:49.689883 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:34:49.689894 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:34:49.689905 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:34:49.689916 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:34:49.689927 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:34:49.689938 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:34:49.689949 | orchestrator |
2026-04-10 00:34:49.689960 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-10 00:34:49.689971 | orchestrator | Friday 10 April 2026 00:34:41 +0000 (0:00:01.018) 0:00:16.702 **********
2026-04-10 00:34:49.689982 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:34:49.689993 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:34:49.690004 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:34:49.690014 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:34:49.690089 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:34:49.690101 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:34:49.690112 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:34:49.690123 | orchestrator |
2026-04-10 00:34:49.690134 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-10 00:34:49.690145 | orchestrator | Friday 10 April 2026 00:34:41 +0000 (0:00:00.584) 0:00:17.286 **********
2026-04-10 00:34:49.690156 | orchestrator | ok: [testbed-manager]
2026-04-10 00:34:49.690167 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:34:49.690178 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:34:49.690189 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:34:49.690199 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:34:49.690210 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:34:49.690221 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:34:49.690232 | orchestrator |
2026-04-10 00:34:49.690243 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-10 00:34:49.690262 | orchestrator | Friday 10 April 2026 00:34:44 +0000 (0:00:02.657) 0:00:19.944 **********
2026-04-10 00:34:49.690273 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:34:49.690284 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:34:49.690295 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:34:49.690331 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:34:49.690343 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:34:49.690353 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:34:49.690365 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-04-10 00:34:49.690377 | orchestrator |
2026-04-10 00:34:49.690388 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-10 00:34:49.690415 | orchestrator | Friday 10 April 2026 00:34:45 +0000 (0:00:00.771) 0:00:20.715 **********
2026-04-10 00:34:49.690441 | orchestrator | ok: [testbed-manager]
2026-04-10 00:34:49.690462 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:34:49.690480 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:34:49.690497 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:34:49.690514 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:34:49.690533 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:34:49.690551 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:34:49.690570 | orchestrator |
2026-04-10 00:34:49.690589 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-10 00:34:49.690607 | orchestrator | Friday 10 April 2026 00:34:46 +0000 (0:00:01.564) 0:00:22.280 **********
2026-04-10 00:34:49.690621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:34:49.690635 | orchestrator |
2026-04-10 00:34:49.690654 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-10 00:34:49.690672 | orchestrator | Friday 10 April 2026 00:34:48 +0000 (0:00:01.146) 0:00:23.426 **********
2026-04-10 00:34:49.690690 | orchestrator | ok: [testbed-manager]
2026-04-10 00:34:49.690708 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:34:49.690726 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:34:49.690742 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:34:49.690762 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:34:49.690778 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:34:49.690794 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:34:49.690812 | orchestrator |
2026-04-10 00:34:49.690831 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-10 00:34:49.690851 | orchestrator | Friday 10 April 2026 00:34:49 +0000 (0:00:00.619) 0:00:24.579 **********
2026-04-10 00:34:49.690870 | orchestrator | ok: [testbed-manager]
2026-04-10 00:34:49.690889 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:34:49.690909 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:34:49.690926 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:34:49.690944 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:34:49.690976 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:35:05.942448 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:35:05.942589 | orchestrator |
2026-04-10 00:35:05.942609 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-10 00:35:05.942624 | orchestrator | Friday 10 April 2026 00:34:49 +0000 (0:00:00.619) 0:00:25.198 **********
2026-04-10 00:35:05.942636 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-10 00:35:05.942647 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-10 00:35:05.942658 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-10 00:35:05.942670 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-10 00:35:05.942681 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-10 00:35:05.942716 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-10 00:35:05.942728 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-10 00:35:05.942739 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-10 00:35:05.942750 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-10 00:35:05.942761 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-10 00:35:05.942772 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-10 00:35:05.942783 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-10 00:35:05.942794 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-10 00:35:05.942805 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-10 00:35:05.942816 | orchestrator |
2026-04-10 00:35:05.942827 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-10 00:35:05.942838 | orchestrator | Friday 10 April 2026 00:34:50 +0000 (0:00:01.176) 0:00:26.375 **********
2026-04-10 00:35:05.942849 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:35:05.942861 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:35:05.942872 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:35:05.942883 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:35:05.942893 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:35:05.942904 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:35:05.942915 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:35:05.942926 | orchestrator |
2026-04-10 00:35:05.942939 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-10 00:35:05.942952 | orchestrator | Friday 10 April 2026 00:34:51 +0000 (0:00:00.539) 0:00:26.914 **********
2026-04-10 00:35:05.942966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:35:05.942982 | orchestrator |
2026-04-10 00:35:05.942995 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-10 00:35:05.943008 | orchestrator | Friday 10 April 2026 00:34:55 +0000 (0:00:04.024) 0:00:30.938 **********
2026-04-10 00:35:05.943023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943052 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-10 00:35:05.943067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943155 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-10 00:35:05.943176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-10 00:35:05.943187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-10 00:35:05.943198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-10 00:35:05.943222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-10 00:35:05.943233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-10 00:35:05.943245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-10 00:35:05.943256 | orchestrator |
2026-04-10 00:35:05.943267 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-10 00:35:05.943278 | orchestrator | Friday 10 April 2026 00:35:01 +0000 (0:00:05.710) 0:00:36.649 **********
2026-04-10 00:35:05.943290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943327 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-10 00:35:05.943345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943356 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-10 00:35:05.943373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-10 00:35:05.943437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:05.943466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:17.268539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-10 00:35:17.268681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-10 00:35:17.268712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-10 00:35:17.268730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-10 00:35:17.268749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-10 00:35:17.268767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-10 00:35:17.268786 | orchestrator |
2026-04-10 00:35:17.268807 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-10 00:35:17.268827 | orchestrator | Friday 10 April 2026 00:35:06 +0000 (0:00:05.699) 0:00:42.349 **********
2026-04-10 00:35:17.268848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:35:17.268868 | orchestrator |
2026-04-10 00:35:17.268881 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-10 00:35:17.268892 | orchestrator | Friday 10 April 2026 00:35:08 +0000 (0:00:01.199) 0:00:43.548 **********
2026-04-10 00:35:17.268903 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:17.268915 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:35:17.268926 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:35:17.268937 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:35:17.268949 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:35:17.268960 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:35:17.268971 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:35:17.268981 | orchestrator |
2026-04-10 00:35:17.269018 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-10 00:35:17.269045 | orchestrator | Friday 10 April 2026 00:35:09 +0000 (0:00:00.928) 0:00:44.477 **********
2026-04-10 00:35:17.269059 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-10 00:35:17.269075 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-10 00:35:17.269095 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-10 00:35:17.269114 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-10 00:35:17.269133 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:35:17.269178 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-10 00:35:17.269214 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-10 00:35:17.269233 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-10 00:35:17.269254 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-10 00:35:17.269268 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:35:17.269280 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-10 00:35:17.269293 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-10 00:35:17.269335 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-10 00:35:17.269355 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-10 00:35:17.269372 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:35:17.269384 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-10 00:35:17.269397 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-10 00:35:17.269409 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-10 00:35:17.269442 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-10 00:35:17.269453 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:35:17.269463 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-10 00:35:17.269473 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-10 00:35:17.269482 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-10 00:35:17.269492 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-10 00:35:17.269502 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:35:17.269511 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-10 00:35:17.269521 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-10 00:35:17.269531 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-10 00:35:17.269541 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-10 00:35:17.269551 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:35:17.269561 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-10 00:35:17.269570 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-10 00:35:17.269580 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-10 00:35:17.269590 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-10 00:35:17.269599 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:35:17.269609 | orchestrator |
2026-04-10 00:35:17.269619 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-10 00:35:17.269640 | orchestrator | Friday 10 April 2026 00:35:09 +0000 (0:00:00.793) 0:00:45.270 **********
2026-04-10 00:35:17.269651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:35:17.269661 | orchestrator |
2026-04-10 00:35:17.269671 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-10 00:35:17.269680 | orchestrator | Friday 10 April 2026 00:35:10 +0000 (0:00:01.075) 0:00:46.346 **********
2026-04-10 00:35:17.269690 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:35:17.269700 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:35:17.269709 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:35:17.269719 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:35:17.269742 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:35:17.269752 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:35:17.269771 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:35:17.269781 | orchestrator |
2026-04-10 00:35:17.269791 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-04-10 00:35:17.269801 | orchestrator | Friday 10 April 2026 00:35:11 +0000 (0:00:00.542) 0:00:46.888 **********
2026-04-10 00:35:17.269811 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:35:17.269820 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:35:17.269830 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:35:17.269839 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:35:17.269849 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:35:17.269858 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:35:17.269868 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:35:17.269877 | orchestrator |
2026-04-10 00:35:17.269894 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-04-10 00:35:17.269904 | orchestrator | Friday 10 April 2026 00:35:12 +0000 (0:00:00.621) 0:00:47.510 **********
2026-04-10 00:35:17.269914 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:35:17.269924 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:35:17.269934 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:35:17.269943 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:35:17.269953 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:35:17.269962 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:35:17.269972 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:35:17.269982 | orchestrator |
2026-04-10 00:35:17.269991 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-04-10 00:35:17.270001 | orchestrator | Friday 10 April 2026 00:35:12 +0000 (0:00:00.538) 0:00:48.048 **********
2026-04-10 00:35:17.270011 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:17.270086 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:35:17.270095 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:35:17.270105 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:35:17.270115 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:35:17.270124 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:35:17.270134 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:35:17.270144 | orchestrator |
2026-04-10 00:35:17.270154 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-04-10 00:35:17.270164 | orchestrator | Friday 10 April 2026 00:35:14 +0000 (0:00:01.641) 0:00:49.690 **********
2026-04-10 00:35:17.270174 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:17.270183 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:35:17.270193 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:35:17.270202 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:35:17.270212 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:35:17.270221 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:35:17.270231 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:35:17.270241 | orchestrator |
2026-04-10 00:35:17.270251 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-04-10 00:35:17.270260 | orchestrator | Friday 10 April 2026 00:35:15 +0000 (0:00:01.019) 0:00:50.709 **********
2026-04-10 00:35:17.270277 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:17.270287 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:35:17.270297 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:35:17.270362 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:35:17.270373 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:35:17.270383 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:35:17.270392 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:35:17.270402 | orchestrator |
2026-04-10 00:35:17.270421 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-10 00:35:18.693196 | orchestrator | Friday 10 April 2026 00:35:17 +0000 (0:00:01.939) 0:00:52.649 **********
2026-04-10 00:35:18.693287 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:35:18.693373 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:35:18.693388 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:35:18.693399 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:35:18.693408 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:35:18.693415 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:35:18.693421 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:35:18.693428 | orchestrator |
2026-04-10 00:35:18.693436 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-10 00:35:18.693444 | orchestrator | Friday 10 April 2026 00:35:17 +0000 (0:00:00.711) 0:00:53.360 **********
2026-04-10 00:35:18.693451 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:35:18.693458 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:35:18.693464 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:35:18.693471 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:35:18.693477 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:35:18.693483 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:35:18.693489 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:35:18.693496 | orchestrator |
2026-04-10 00:35:18.693502 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:35:18.693510 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-10 00:35:18.693518 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-10 00:35:18.693525 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-10 00:35:18.693531 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-10 00:35:18.693538 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-10 00:35:18.693544 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-10 00:35:18.693550 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-10 00:35:18.693557 | orchestrator |
2026-04-10 00:35:18.693567 | orchestrator |
2026-04-10 00:35:18.693574 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:35:18.693580 | orchestrator | Friday 10 April 2026 00:35:18 +0000 (0:00:00.472) 0:00:53.832 **********
2026-04-10 00:35:18.693587 | orchestrator | ===============================================================================
2026-04-10 00:35:18.693593 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.71s
2026-04-10 00:35:18.693599 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.70s
2026-04-10 00:35:18.693606 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.03s
2026-04-10 00:35:18.693634 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.86s
2026-04-10 00:35:18.693641 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.80s
2026-04-10 00:35:18.693647 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.66s
2026-04-10 00:35:18.693653 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 1.94s
2026-04-10 00:35:18.693660 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.64s
2026-04-10 00:35:18.693666 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.62s
2026-04-10 00:35:18.693672 | orchestrator | osism.commons.network : Manage service networkd-dispatcher --------------
1.56s 2026-04-10 00:35:18.693679 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.51s 2026-04-10 00:35:18.693685 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2026-04-10 00:35:18.693691 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.25s 2026-04-10 00:35:18.693698 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.20s 2026-04-10 00:35:18.693704 | orchestrator | osism.commons.network : Create required directories --------------------- 1.18s 2026-04-10 00:35:18.693710 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.18s 2026-04-10 00:35:18.693716 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s 2026-04-10 00:35:18.693723 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.15s 2026-04-10 00:35:18.693729 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.08s 2026-04-10 00:35:18.693735 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.02s 2026-04-10 00:35:18.821460 | orchestrator | + osism apply wireguard 2026-04-10 00:35:29.992830 | orchestrator | 2026-04-10 00:35:29 | INFO  | Prepare task for execution of wireguard. 2026-04-10 00:35:30.061407 | orchestrator | 2026-04-10 00:35:30 | INFO  | Task 51fd4b70-df5f-4ef4-993b-6f4e4acbc775 (wireguard) was prepared for execution. 2026-04-10 00:35:30.061507 | orchestrator | 2026-04-10 00:35:30 | INFO  | It takes a moment until task 51fd4b70-df5f-4ef4-993b-6f4e4acbc775 (wireguard) has been started and output is visible here. 
2026-04-10 00:35:47.891965 | orchestrator |
2026-04-10 00:35:47.892076 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-10 00:35:47.892093 | orchestrator |
2026-04-10 00:35:47.892105 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-10 00:35:47.892116 | orchestrator | Friday 10 April 2026 00:35:32 +0000 (0:00:00.251) 0:00:00.251 **********
2026-04-10 00:35:47.892129 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:47.892141 | orchestrator |
2026-04-10 00:35:47.892153 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-10 00:35:47.892164 | orchestrator | Friday 10 April 2026 00:35:34 +0000 (0:00:01.462) 0:00:01.713 **********
2026-04-10 00:35:47.892175 | orchestrator | changed: [testbed-manager]
2026-04-10 00:35:47.892187 | orchestrator |
2026-04-10 00:35:47.892198 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-10 00:35:47.892210 | orchestrator | Friday 10 April 2026 00:35:41 +0000 (0:00:06.614) 0:00:08.328 **********
2026-04-10 00:35:47.892221 | orchestrator | changed: [testbed-manager]
2026-04-10 00:35:47.892232 | orchestrator |
2026-04-10 00:35:47.892243 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-10 00:35:47.892276 | orchestrator | Friday 10 April 2026 00:35:41 +0000 (0:00:00.389) 0:00:08.802 **********
2026-04-10 00:35:47.892288 | orchestrator | changed: [testbed-manager]
2026-04-10 00:35:47.892359 | orchestrator |
2026-04-10 00:35:47.892373 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-10 00:35:47.892385 | orchestrator | Friday 10 April 2026 00:35:41 +0000 (0:00:00.480) 0:00:09.191 **********
2026-04-10 00:35:47.892396 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:47.892430 | orchestrator |
2026-04-10 00:35:47.892442 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-10 00:35:47.892453 | orchestrator | Friday 10 April 2026 00:35:42 +0000 (0:00:00.480) 0:00:09.671 **********
2026-04-10 00:35:47.892464 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:47.892475 | orchestrator |
2026-04-10 00:35:47.892486 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-10 00:35:47.892497 | orchestrator | Friday 10 April 2026 00:35:42 +0000 (0:00:00.373) 0:00:10.045 **********
2026-04-10 00:35:47.892509 | orchestrator | ok: [testbed-manager]
2026-04-10 00:35:47.892522 | orchestrator |
2026-04-10 00:35:47.892535 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-10 00:35:47.892548 | orchestrator | Friday 10 April 2026 00:35:43 +0000 (0:00:00.392) 0:00:10.437 **********
2026-04-10 00:35:47.892561 | orchestrator | changed: [testbed-manager]
2026-04-10 00:35:47.892573 | orchestrator |
2026-04-10 00:35:47.892586 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-10 00:35:47.892599 | orchestrator | Friday 10 April 2026 00:35:44 +0000 (0:00:01.073) 0:00:11.511 **********
2026-04-10 00:35:47.892611 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-10 00:35:47.892624 | orchestrator | changed: [testbed-manager]
2026-04-10 00:35:47.892636 | orchestrator |
2026-04-10 00:35:47.892649 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-10 00:35:47.892662 | orchestrator | Friday 10 April 2026 00:35:45 +0000 (0:00:00.849) 0:00:12.360 **********
2026-04-10 00:35:47.892674 | orchestrator | changed: [testbed-manager]
2026-04-10 00:35:47.892687 | orchestrator |
2026-04-10 00:35:47.892700 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-10 00:35:47.892717 | orchestrator | Friday 10 April 2026 00:35:46 +0000 (0:00:01.806) 0:00:14.167 **********
2026-04-10 00:35:47.892730 | orchestrator | changed: [testbed-manager]
2026-04-10 00:35:47.892743 | orchestrator |
2026-04-10 00:35:47.892755 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:35:47.892769 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:35:47.892783 | orchestrator |
2026-04-10 00:35:47.892796 | orchestrator |
2026-04-10 00:35:47.892809 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:35:47.892822 | orchestrator | Friday 10 April 2026 00:35:47 +0000 (0:00:00.841) 0:00:15.009 **********
2026-04-10 00:35:47.892834 | orchestrator | ===============================================================================
2026-04-10 00:35:47.892847 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.61s
2026-04-10 00:35:47.892861 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.81s
2026-04-10 00:35:47.892872 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.46s
2026-04-10 00:35:47.892883 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.07s
2026-04-10 00:35:47.892894 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.85s
2026-04-10 00:35:47.892905 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s
2026-04-10 00:35:47.892916 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s
2026-04-10 00:35:47.892927 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.47s
2026-04-10 00:35:47.892938 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s
2026-04-10 00:35:47.892949 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.39s
2026-04-10 00:35:47.892960 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.37s
2026-04-10 00:35:48.007665 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-10 00:35:48.033835 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-10 00:35:48.033936 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-10 00:35:48.110065 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 184 0 --:--:-- --:--:-- --:--:-- 186
2026-04-10 00:35:48.122288 | orchestrator | + osism apply --environment custom workarounds
2026-04-10 00:35:49.361515 | orchestrator | 2026-04-10 00:35:49 | INFO  | Trying to run play workarounds in environment custom
2026-04-10 00:35:59.442517 | orchestrator | 2026-04-10 00:35:59 | INFO  | Prepare task for execution of workarounds.
2026-04-10 00:35:59.537769 | orchestrator | 2026-04-10 00:35:59 | INFO  | Task 707c4b28-9e8a-4946-aaf0-4e7822c736f2 (workarounds) was prepared for execution.
2026-04-10 00:35:59.537871 | orchestrator | 2026-04-10 00:35:59 | INFO  | It takes a moment until task 707c4b28-9e8a-4946-aaf0-4e7822c736f2 (workarounds) has been started and output is visible here.
2026-04-10 00:36:23.801135 | orchestrator |
2026-04-10 00:36:23.801255 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-10 00:36:23.801273 | orchestrator |
2026-04-10 00:36:23.801285 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-10 00:36:23.801337 | orchestrator | Friday 10 April 2026 00:36:02 +0000 (0:00:00.173) 0:00:00.173 **********
2026-04-10 00:36:23.801351 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-10 00:36:23.801363 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-10 00:36:23.801374 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-10 00:36:23.801386 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-10 00:36:23.801397 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-10 00:36:23.801408 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-10 00:36:23.801420 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-10 00:36:23.801431 | orchestrator |
2026-04-10 00:36:23.801442 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-10 00:36:23.801454 | orchestrator |
2026-04-10 00:36:23.801466 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-10 00:36:23.801477 | orchestrator | Friday 10 April 2026 00:36:03 +0000 (0:00:00.708) 0:00:00.881 **********
2026-04-10 00:36:23.801489 | orchestrator | ok: [testbed-manager]
2026-04-10 00:36:23.801502 | orchestrator |
2026-04-10 00:36:23.801513 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-10 00:36:23.801524 | orchestrator |
2026-04-10 00:36:23.801535 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-10 00:36:23.801547 | orchestrator | Friday 10 April 2026 00:36:05 +0000 (0:00:02.503) 0:00:03.384 **********
2026-04-10 00:36:23.801558 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:36:23.801569 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:36:23.801580 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:36:23.801592 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:36:23.801603 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:36:23.801614 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:36:23.801625 | orchestrator |
2026-04-10 00:36:23.801636 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-10 00:36:23.801647 | orchestrator |
2026-04-10 00:36:23.801659 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-10 00:36:23.801671 | orchestrator | Friday 10 April 2026 00:36:08 +0000 (0:00:02.348) 0:00:05.733 **********
2026-04-10 00:36:23.801702 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-10 00:36:23.801716 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-10 00:36:23.801729 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-10 00:36:23.801763 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-10 00:36:23.801776 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-10 00:36:23.801789 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-10 00:36:23.801801 | orchestrator |
2026-04-10 00:36:23.801814 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-10 00:36:23.801826 | orchestrator | Friday 10 April 2026 00:36:09 +0000 (0:00:01.295) 0:00:07.029 **********
2026-04-10 00:36:23.801839 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:36:23.801852 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:36:23.801864 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:36:23.801877 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:36:23.801889 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:36:23.801902 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:36:23.801914 | orchestrator |
2026-04-10 00:36:23.801927 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-10 00:36:23.801940 | orchestrator | Friday 10 April 2026 00:36:13 +0000 (0:00:03.885) 0:00:10.914 **********
2026-04-10 00:36:23.801952 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:36:23.801965 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:36:23.801978 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:36:23.801991 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:36:23.802004 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:36:23.802072 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:36:23.802088 | orchestrator |
2026-04-10 00:36:23.802099 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-10 00:36:23.802110 | orchestrator |
2026-04-10 00:36:23.802122 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-10 00:36:23.802133 | orchestrator | Friday 10 April 2026 00:36:13 +0000 (0:00:00.470) 0:00:11.385 **********
2026-04-10 00:36:23.802144 | orchestrator | changed: [testbed-manager]
2026-04-10 00:36:23.802155 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:36:23.802166 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:36:23.802178 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:36:23.802189 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:36:23.802200 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:36:23.802211 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:36:23.802222 | orchestrator |
2026-04-10 00:36:23.802233 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-10 00:36:23.802244 | orchestrator | Friday 10 April 2026 00:36:15 +0000 (0:00:01.691) 0:00:13.077 **********
2026-04-10 00:36:23.802255 | orchestrator | changed: [testbed-manager]
2026-04-10 00:36:23.802266 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:36:23.802277 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:36:23.802289 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:36:23.802319 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:36:23.802330 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:36:23.802361 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:36:23.802373 | orchestrator |
2026-04-10 00:36:23.802384 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-10 00:36:23.802396 | orchestrator | Friday 10 April 2026 00:36:17 +0000 (0:00:01.494) 0:00:14.571 **********
2026-04-10 00:36:23.802407 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:36:23.802418 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:36:23.802429 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:36:23.802440 | orchestrator | ok: [testbed-manager]
2026-04-10 00:36:23.802451 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:36:23.802462 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:36:23.802473 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:36:23.802484 | orchestrator |
2026-04-10 00:36:23.802506 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-10 00:36:23.802517 | orchestrator | Friday 10 April 2026 00:36:18 +0000 (0:00:01.787) 0:00:16.359 **********
2026-04-10 00:36:23.802528 | orchestrator | changed: [testbed-manager]
2026-04-10 00:36:23.802539 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:36:23.802550 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:36:23.802561 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:36:23.802572 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:36:23.802583 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:36:23.802594 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:36:23.802605 | orchestrator |
2026-04-10 00:36:23.802616 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-10 00:36:23.802628 | orchestrator | Friday 10 April 2026 00:36:20 +0000 (0:00:01.583) 0:00:17.942 **********
2026-04-10 00:36:23.802639 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:36:23.802650 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:36:23.802661 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:36:23.802671 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:36:23.802682 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:36:23.802693 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:36:23.802704 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:36:23.802715 | orchestrator |
2026-04-10 00:36:23.802727 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-10 00:36:23.802738 | orchestrator |
2026-04-10 00:36:23.802749 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-10 00:36:23.802760 | orchestrator | Friday 10 April 2026 00:36:21 +0000 (0:00:00.717) 0:00:18.660 **********
2026-04-10 00:36:23.802771 | orchestrator | ok: [testbed-manager]
2026-04-10 00:36:23.802782 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:36:23.802793 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:36:23.802804 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:36:23.802815 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:36:23.802826 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:36:23.802837 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:36:23.802848 | orchestrator |
2026-04-10 00:36:23.802865 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:36:23.802879 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-10 00:36:23.802892 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:23.802903 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:23.802914 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:23.802925 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:23.802936 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:23.802947 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:23.802958 | orchestrator |
2026-04-10 00:36:23.802969 | orchestrator |
2026-04-10 00:36:23.802981 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:36:23.802992 | orchestrator | Friday 10 April 2026 00:36:23 +0000 (0:00:02.637) 0:00:21.298 **********
2026-04-10 00:36:23.803004 | orchestrator | ===============================================================================
2026-04-10 00:36:23.803023 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.89s
2026-04-10 00:36:23.803034 | orchestrator | Install python3-docker -------------------------------------------------- 2.64s
2026-04-10 00:36:23.803045 | orchestrator | Apply netplan configuration --------------------------------------------- 2.50s
2026-04-10 00:36:23.803056 | orchestrator | Apply netplan configuration --------------------------------------------- 2.35s
2026-04-10 00:36:23.803067 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.79s
2026-04-10 00:36:23.803078 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.69s
2026-04-10 00:36:23.803089 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.58s
2026-04-10 00:36:23.803100 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.49s
2026-04-10 00:36:23.803111 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.30s
2026-04-10 00:36:23.803122 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.72s
2026-04-10 00:36:23.803133 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.71s
2026-04-10 00:36:23.803150 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.47s
2026-04-10 00:36:24.253264 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-10 00:36:35.543854 | orchestrator | 2026-04-10 00:36:35 | INFO  | Prepare task for execution of reboot.
2026-04-10 00:36:35.611389 | orchestrator | 2026-04-10 00:36:35 | INFO  | Task f2b1bf48-664c-4593-80be-58baf6a52b66 (reboot) was prepared for execution.
2026-04-10 00:36:35.611490 | orchestrator | 2026-04-10 00:36:35 | INFO  | It takes a moment until task f2b1bf48-664c-4593-80be-58baf6a52b66 (reboot) has been started and output is visible here.
2026-04-10 00:36:46.753156 | orchestrator |
2026-04-10 00:36:46.753268 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-10 00:36:46.753285 | orchestrator |
2026-04-10 00:36:46.753347 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-10 00:36:46.753362 | orchestrator | Friday 10 April 2026 00:36:38 +0000 (0:00:00.242) 0:00:00.242 **********
2026-04-10 00:36:46.753373 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:36:46.753386 | orchestrator |
2026-04-10 00:36:46.753397 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-10 00:36:46.753408 | orchestrator | Friday 10 April 2026 00:36:38 +0000 (0:00:00.144) 0:00:00.387 **********
2026-04-10 00:36:46.753419 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:36:46.753430 | orchestrator |
2026-04-10 00:36:46.753441 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-10 00:36:46.753452 | orchestrator | Friday 10 April 2026 00:36:40 +0000 (0:00:01.295) 0:00:01.683 **********
2026-04-10 00:36:46.753463 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:36:46.753473 | orchestrator |
2026-04-10 00:36:46.753484 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-10 00:36:46.753495 | orchestrator |
2026-04-10 00:36:46.753506 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-10 00:36:46.753517 | orchestrator | Friday 10 April 2026 00:36:40 +0000 (0:00:00.107) 0:00:01.791 **********
2026-04-10 00:36:46.753527 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:36:46.753538 | orchestrator |
2026-04-10 00:36:46.753549 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-10 00:36:46.753560 | orchestrator | Friday 10 April 2026 00:36:40 +0000 (0:00:00.098) 0:00:01.889 **********
2026-04-10 00:36:46.753571 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:36:46.753582 | orchestrator |
2026-04-10 00:36:46.753608 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-10 00:36:46.753619 | orchestrator | Friday 10 April 2026 00:36:41 +0000 (0:00:01.033) 0:00:02.922 **********
2026-04-10 00:36:46.753630 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:36:46.753641 | orchestrator |
2026-04-10 00:36:46.753686 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-10 00:36:46.753708 | orchestrator |
2026-04-10 00:36:46.753728 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-10 00:36:46.753749 | orchestrator | Friday 10 April 2026 00:36:41 +0000 (0:00:00.112) 0:00:03.035 **********
2026-04-10 00:36:46.753770 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:36:46.753791 | orchestrator |
2026-04-10 00:36:46.753811 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-10 00:36:46.753828 | orchestrator | Friday 10 April 2026 00:36:41 +0000 (0:00:00.095) 0:00:03.131 **********
2026-04-10 00:36:46.753840 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:36:46.753853 | orchestrator |
2026-04-10 00:36:46.753866 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-10 00:36:46.753878 | orchestrator | Friday 10 April 2026 00:36:42 +0000 (0:00:01.029) 0:00:04.160 **********
2026-04-10 00:36:46.753891 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:36:46.753904 | orchestrator |
2026-04-10 00:36:46.753914 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-10 00:36:46.753925 | orchestrator |
2026-04-10 00:36:46.753936 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-10 00:36:46.753947 | orchestrator | Friday 10 April 2026 00:36:42 +0000 (0:00:00.112) 0:00:04.273 **********
2026-04-10 00:36:46.753958 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:36:46.753969 | orchestrator |
2026-04-10 00:36:46.753980 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-10 00:36:46.753991 | orchestrator | Friday 10 April 2026 00:36:42 +0000 (0:00:00.110) 0:00:04.383 **********
2026-04-10 00:36:46.754002 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:36:46.754077 | orchestrator |
2026-04-10 00:36:46.754092 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-10 00:36:46.754104 | orchestrator | Friday 10 April 2026 00:36:43 +0000 (0:00:00.978) 0:00:05.361 **********
2026-04-10 00:36:46.754115 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:36:46.754126 | orchestrator |
2026-04-10 00:36:46.754193 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-10 00:36:46.754215 | orchestrator |
2026-04-10 00:36:46.754233 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-10 00:36:46.754250 | orchestrator | Friday 10 April 2026 00:36:43 +0000 (0:00:00.107) 0:00:05.469 **********
2026-04-10 00:36:46.754268 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:36:46.754284 | orchestrator |
2026-04-10 00:36:46.754368 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-10 00:36:46.754388 | orchestrator | Friday 10 April 2026 00:36:44 +0000 (0:00:00.217) 0:00:05.686 **********
2026-04-10 00:36:46.754405 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:36:46.754422 | orchestrator |
2026-04-10 00:36:46.754441 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-10 00:36:46.754459 | orchestrator | Friday 10 April 2026 00:36:45 +0000 (0:00:01.006) 0:00:06.692 **********
2026-04-10 00:36:46.754479 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:36:46.754497 | orchestrator |
2026-04-10 00:36:46.754516 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-10 00:36:46.754535 | orchestrator |
2026-04-10 00:36:46.754554 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-10 00:36:46.754571 | orchestrator | Friday 10 April 2026 00:36:45 +0000 (0:00:00.115) 0:00:06.808 **********
2026-04-10 00:36:46.754591 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:36:46.754608 | orchestrator |
2026-04-10 00:36:46.754627 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-10 00:36:46.754645 | orchestrator | Friday 10 April 2026 00:36:45 +0000 (0:00:00.104) 0:00:06.913 **********
2026-04-10 00:36:46.754665 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:36:46.754685 | orchestrator |
2026-04-10 00:36:46.754703 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-10 00:36:46.754743 | orchestrator | Friday 10 April 2026 00:36:46 +0000 (0:00:01.010) 0:00:07.923 **********
2026-04-10 00:36:46.754789 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:36:46.754810 | orchestrator |
2026-04-10 00:36:46.754828 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:36:46.754848 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:46.754861 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:46.754872 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:46.754883 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:46.754894 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:46.754905 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-10 00:36:46.754916 | orchestrator |
2026-04-10 00:36:46.754927 | orchestrator |
2026-04-10 00:36:46.754938 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:36:46.754958 | orchestrator | Friday 10 April 2026 00:36:46 +0000 (0:00:00.030) 0:00:07.954 **********
2026-04-10 00:36:46.754970 | orchestrator | ===============================================================================
2026-04-10 00:36:46.754981 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.35s
2026-04-10 00:36:46.754991 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s
2026-04-10 00:36:46.755002 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s
2026-04-10 00:36:46.934563 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-10 00:36:58.276466 | orchestrator | 2026-04-10 00:36:58 | INFO  | Prepare task for execution of wait-for-connection.
2026-04-10 00:36:58.345612 | orchestrator | 2026-04-10 00:36:58 | INFO  | Task 164ca132-cbd5-4382-ad6b-4ddd8dc7b0f3 (wait-for-connection) was prepared for execution.
2026-04-10 00:36:58.345686 | orchestrator | 2026-04-10 00:36:58 | INFO  | It takes a moment until task 164ca132-cbd5-4382-ad6b-4ddd8dc7b0f3 (wait-for-connection) has been started and output is visible here.
2026-04-10 00:37:12.858180 | orchestrator |
2026-04-10 00:37:12.858359 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-04-10 00:37:12.858391 | orchestrator |
2026-04-10 00:37:12.858411 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-04-10 00:37:12.858432 | orchestrator | Friday 10 April 2026 00:37:01 +0000 (0:00:00.228) 0:00:00.228 **********
2026-04-10 00:37:12.858452 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:37:12.858473 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:37:12.858487 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:37:12.858499 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:37:12.858510 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:37:12.858523 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:37:12.858534 | orchestrator |
2026-04-10 00:37:12.858545 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:37:12.858558 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:37:12.858571 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:37:12.858612 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:37:12.858624 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:37:12.858635 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:37:12.858647 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:37:12.858659 | orchestrator |
2026-04-10 00:37:12.858672 | orchestrator |
2026-04-10 00:37:12.858685 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:37:12.858697 | orchestrator | Friday 10 April 2026 00:37:12 +0000 (0:00:11.579) 0:00:11.807 **********
2026-04-10 00:37:12.858709 | orchestrator | ===============================================================================
2026-04-10 00:37:12.858722 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s
2026-04-10 00:37:12.969280 | orchestrator | + osism apply hddtemp
2026-04-10 00:37:24.114885 | orchestrator | 2026-04-10 00:37:24 | INFO  | Prepare task for execution of hddtemp.
2026-04-10 00:37:24.185759 | orchestrator | 2026-04-10 00:37:24 | INFO  | Task ef273701-72be-46f6-8e6c-a75af90c84e7 (hddtemp) was prepared for execution.
2026-04-10 00:37:24.185859 | orchestrator | 2026-04-10 00:37:24 | INFO  | It takes a moment until task ef273701-72be-46f6-8e6c-a75af90c84e7 (hddtemp) has been started and output is visible here.
2026-04-10 00:37:50.096647 | orchestrator |
2026-04-10 00:37:50.096760 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-04-10 00:37:50.096776 | orchestrator |
2026-04-10 00:37:50.096789 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-04-10 00:37:50.096801 | orchestrator | Friday 10 April 2026 00:37:27 +0000 (0:00:00.237) 0:00:00.237 **********
2026-04-10 00:37:50.096812 | orchestrator | ok: [testbed-manager]
2026-04-10 00:37:50.096825 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:37:50.096836 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:37:50.096848 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:37:50.096859 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:37:50.096870 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:37:50.096881 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:37:50.096947 | orchestrator |
2026-04-10 00:37:50.096960 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-04-10 00:37:50.096971 | orchestrator | Friday 10 April 2026 00:37:27 +0000 (0:00:00.453) 0:00:00.691 **********
2026-04-10 00:37:50.096983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:37:50.096997 | orchestrator |
2026-04-10 00:37:50.097009 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-04-10 00:37:50.097020 | orchestrator | Friday 10 April 2026 00:37:28 +0000 (0:00:00.941) 0:00:01.633 **********
2026-04-10 00:37:50.097031 | orchestrator | ok: [testbed-manager]
2026-04-10 00:37:50.097042 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:37:50.097070 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:37:50.097081 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:37:50.097092 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:37:50.097104 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:37:50.097114 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:37:50.097125 | orchestrator |
2026-04-10 00:37:50.097137 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-04-10 00:37:50.097148 | orchestrator | Friday 10 April 2026 00:37:30 +0000 (0:00:02.294) 0:00:03.927 **********
2026-04-10 00:37:50.097159 | orchestrator | changed: [testbed-manager]
2026-04-10 00:37:50.097172 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:37:50.097205 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:37:50.097218 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:37:50.097231 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:37:50.097244 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:37:50.097256 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:37:50.097268 | orchestrator |
2026-04-10 00:37:50.097281 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-04-10 00:37:50.097318 | orchestrator | Friday 10 April 2026 00:37:31 +0000 (0:00:00.846) 0:00:04.773 **********
2026-04-10 00:37:50.097332 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:37:50.097344 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:37:50.097357 | orchestrator | ok: [testbed-manager]
2026-04-10 00:37:50.097370 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:37:50.097382 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:37:50.097395 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:37:50.097407 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:37:50.097420 | orchestrator |
2026-04-10 00:37:50.097433 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-04-10 00:37:50.097446 | orchestrator | Friday 10 April 2026 00:37:32 +0000 (0:00:01.170) 0:00:05.944 **********
2026-04-10 00:37:50.097459 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:37:50.097471 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:37:50.097484 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:37:50.097497 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:37:50.097509 | orchestrator | changed: [testbed-manager]
2026-04-10 00:37:50.097522 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:37:50.097535 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:37:50.097547 | orchestrator |
2026-04-10 00:37:50.097558 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-04-10 00:37:50.097569 | orchestrator | Friday 10 April 2026 00:37:33 +0000 (0:00:00.529) 0:00:06.473 **********
2026-04-10 00:37:50.097580 | orchestrator | changed: [testbed-manager]
2026-04-10 00:37:50.097590 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:37:50.097601 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:37:50.097612 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:37:50.097622 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:37:50.097633 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:37:50.097645 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:37:50.097656 | orchestrator |
2026-04-10 00:37:50.097667 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-04-10 00:37:50.097678 | orchestrator | Friday 10 April 2026 00:37:46 +0000 (0:00:13.457) 0:00:19.931 **********
2026-04-10 00:37:50.097689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:37:50.097701 | orchestrator |
2026-04-10 00:37:50.097712 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-04-10 00:37:50.097722 | orchestrator | Friday 10 April 2026 00:37:47 +0000 (0:00:01.132) 0:00:21.063 **********
2026-04-10 00:37:50.097733 | orchestrator | changed: [testbed-manager]
2026-04-10 00:37:50.097773 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:37:50.097785 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:37:50.097795 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:37:50.097806 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:37:50.097817 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:37:50.097828 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:37:50.097838 | orchestrator |
2026-04-10 00:37:50.097849 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:37:50.097860 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:37:50.097891 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-10 00:37:50.097913 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-10 00:37:50.097924 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-10 00:37:50.097935 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-10 00:37:50.097945 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-10 00:37:50.097957 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-10 00:37:50.097967 | orchestrator |
2026-04-10 00:37:50.097978 | orchestrator |
2026-04-10 00:37:50.097990 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:37:50.098001 | orchestrator | Friday 10 April 2026 00:37:49 +0000 (0:00:01.859) 0:00:22.922 **********
2026-04-10 00:37:50.098012 | orchestrator | ===============================================================================
2026-04-10 00:37:50.098083 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.46s
2026-04-10 00:37:50.098095 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.29s
2026-04-10 00:37:50.098106 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.86s
2026-04-10 00:37:50.098117 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.17s
2026-04-10 00:37:50.098128 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.13s
2026-04-10 00:37:50.098139 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.94s
2026-04-10 00:37:50.098150 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.85s
2026-04-10 00:37:50.098160 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.53s
2026-04-10 00:37:50.098171 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.45s
2026-04-10 00:37:50.265073 | orchestrator | ++ semver latest 7.1.1
2026-04-10 00:37:50.319853 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-10 00:37:50.320017 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-10 00:37:50.320037 | orchestrator | + sudo systemctl restart manager.service
2026-04-10 00:38:04.051791 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-10 00:38:04.051877 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-10 00:38:04.051887 | orchestrator | + local max_attempts=60
2026-04-10 00:38:04.051896 | orchestrator | + local name=ceph-ansible
2026-04-10 00:38:04.051903 | orchestrator | + local attempt_num=1
2026-04-10 00:38:04.051910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:04.090063 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:04.090148 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:04.090159 | orchestrator | + sleep 5
2026-04-10 00:38:09.095884 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:09.151228 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:09.151408 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:09.151431 | orchestrator | + sleep 5
2026-04-10 00:38:14.154743 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:14.188248 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:14.188374 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:14.188391 | orchestrator | + sleep 5
2026-04-10 00:38:19.192335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:19.231155 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:19.231252 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:19.231267 | orchestrator | + sleep 5
2026-04-10 00:38:24.235417 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:24.276935 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:24.277020 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:24.277033 | orchestrator | + sleep 5
2026-04-10 00:38:29.281499 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:29.320062 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:29.320157 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:29.320172 | orchestrator | + sleep 5
2026-04-10 00:38:34.325278 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:34.364372 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:34.364466 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:34.364481 | orchestrator | + sleep 5
2026-04-10 00:38:39.368592 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:39.424420 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:39.424546 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:39.424572 | orchestrator | + sleep 5
2026-04-10 00:38:44.427592 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:44.462455 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:44.462563 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:44.462578 | orchestrator | + sleep 5
2026-04-10 00:38:49.466733 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:49.507923 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:49.508020 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:49.508034 | orchestrator | + sleep 5
2026-04-10 00:38:54.513424 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:54.548800 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:54.548888 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:54.548905 | orchestrator | + sleep 5
2026-04-10 00:38:59.552365 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:38:59.581699 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-10 00:38:59.581796 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:38:59.581812 | orchestrator | + sleep 5
2026-04-10 00:39:04.585750 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:39:04.624838 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-10 00:39:04.624971 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-10 00:39:04.624998 | orchestrator | + sleep 5
2026-04-10 00:39:09.628917 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-10 00:39:09.661811 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:39:09.661883 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-10 00:39:09.661895 | orchestrator | + local max_attempts=60
2026-04-10 00:39:09.661906 | orchestrator | + local name=kolla-ansible
2026-04-10 00:39:09.661915 | orchestrator | + local attempt_num=1
2026-04-10 00:39:09.662274 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-10 00:39:09.690906 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:39:09.690976 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-10 00:39:09.690990 | orchestrator | + local max_attempts=60
2026-04-10 00:39:09.691003 | orchestrator | + local name=osism-ansible
2026-04-10 00:39:09.691015 | orchestrator | + local attempt_num=1
2026-04-10 00:39:09.691250 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-10 00:39:09.722557 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-10 00:39:09.722652 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-10 00:39:09.722674 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-10 00:39:09.857189 | orchestrator | ARA in ceph-ansible already disabled.
2026-04-10 00:39:09.977645 | orchestrator | ARA in kolla-ansible already disabled.
2026-04-10 00:39:10.101791 | orchestrator | ARA in osism-ansible already disabled.
2026-04-10 00:39:10.241906 | orchestrator | ARA in osism-kubernetes already disabled.
2026-04-10 00:39:10.241998 | orchestrator | + osism apply gather-facts
2026-04-10 00:39:21.325363 | orchestrator | 2026-04-10 00:39:21 | INFO  | Prepare task for execution of gather-facts.
2026-04-10 00:39:21.394404 | orchestrator | 2026-04-10 00:39:21 | INFO  | Task 74a6a819-b5fa-4cf5-8ca1-ecc7fdbe195b (gather-facts) was prepared for execution.
2026-04-10 00:39:21.394540 | orchestrator | 2026-04-10 00:39:21 | INFO  | It takes a moment until task 74a6a819-b5fa-4cf5-8ca1-ecc7fdbe195b (gather-facts) has been started and output is visible here.
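The `set -x` trace above shows `wait_for_container_healthy` polling each manager container until Docker reports it healthy. Below is a hypothetical reconstruction of that helper based purely on the trace; the real implementation lives in the osism/testbed configuration scripts and may differ. It uses `docker` from `PATH` (the trace calls `/usr/bin/docker` directly).

```shell
wait_for_container_healthy() {
    # Poll `docker inspect` until the container's health status is "healthy",
    # sleeping 5 seconds between polls and giving up after max_attempts polls
    # (the trace above uses 60, i.e. roughly five minutes per container).
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name")
        [ "$status" = "healthy" ] && return 0
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name still $status after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the run above, ceph-ansible took about 13 polls (through `unhealthy` and `starting`) to become healthy, while kolla-ansible and osism-ansible were healthy on the first poll.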
2026-04-10 00:39:34.266737 | orchestrator | 2026-04-10 00:39:34.266824 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-10 00:39:34.266835 | orchestrator | 2026-04-10 00:39:34.266841 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-10 00:39:34.266848 | orchestrator | Friday 10 April 2026 00:39:24 +0000 (0:00:00.248) 0:00:00.248 ********** 2026-04-10 00:39:34.266854 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:39:34.266861 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:39:34.266867 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:39:34.266873 | orchestrator | ok: [testbed-manager] 2026-04-10 00:39:34.266880 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:39:34.266886 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:39:34.266891 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:39:34.266897 | orchestrator | 2026-04-10 00:39:34.266904 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-10 00:39:34.266910 | orchestrator | 2026-04-10 00:39:34.266916 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-10 00:39:34.266922 | orchestrator | Friday 10 April 2026 00:39:33 +0000 (0:00:09.246) 0:00:09.495 ********** 2026-04-10 00:39:34.266928 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:39:34.266935 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:39:34.266941 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:39:34.266947 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:39:34.266953 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:39:34.266959 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:39:34.266965 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:39:34.266970 | orchestrator | 2026-04-10 00:39:34.266976 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-10 00:39:34.266983 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:39:34.266990 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:39:34.266996 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:39:34.267002 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:39:34.267008 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:39:34.267014 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:39:34.267020 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:39:34.267026 | orchestrator | 2026-04-10 00:39:34.267032 | orchestrator | 2026-04-10 00:39:34.267038 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:39:34.267044 | orchestrator | Friday 10 April 2026 00:39:34 +0000 (0:00:00.527) 0:00:10.022 ********** 2026-04-10 00:39:34.267049 | orchestrator | =============================================================================== 2026-04-10 00:39:34.267055 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.25s 2026-04-10 00:39:34.267061 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-04-10 00:39:34.379452 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-10 00:39:34.388719 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-10 
00:39:34.404730 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-10 00:39:34.414872 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-10 00:39:34.424646 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-10 00:39:34.434123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-10 00:39:34.443273 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-10 00:39:34.452352 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-10 00:39:34.460435 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-10 00:39:34.467704 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-10 00:39:34.475686 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-10 00:39:34.483648 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-10 00:39:34.493057 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-10 00:39:34.501428 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-10 00:39:34.510690 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-10 00:39:34.521114 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-10 00:39:34.530605 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-10 00:39:34.540359 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-10 00:39:34.548515 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-10 00:39:34.556807 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-10 00:39:34.566323 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-10 00:39:34.574750 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-10 00:39:34.582485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-10 00:39:34.591252 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-10 00:39:35.061006 | orchestrator | ok: Runtime: 0:24:04.482097 2026-04-10 00:39:35.182058 | 2026-04-10 00:39:35.182204 | TASK [Deploy services] 2026-04-10 00:39:35.716931 | orchestrator | skipping: Conditional result was False 2026-04-10 00:39:35.726528 | 2026-04-10 00:39:35.726752 | TASK [Deploy in a nutshell] 2026-04-10 00:39:36.379204 | orchestrator | 2026-04-10 00:39:36.379350 | orchestrator | # PULL IMAGES 2026-04-10 00:39:36.379361 | orchestrator | 2026-04-10 00:39:36.379366 | orchestrator | + set -e 2026-04-10 00:39:36.379373 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-10 00:39:36.379382 | orchestrator | ++ export INTERACTIVE=false 2026-04-10 00:39:36.379389 | orchestrator | ++ INTERACTIVE=false 2026-04-10 00:39:36.379410 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-10 00:39:36.379420 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-10 00:39:36.379427 | orchestrator | + source /opt/manager-vars.sh 2026-04-10 00:39:36.379432 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-10 00:39:36.379439 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-10 00:39:36.379444 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-10 00:39:36.379451 | orchestrator | ++ CEPH_VERSION=reef 2026-04-10 00:39:36.379456 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-10 00:39:36.379463 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-10 00:39:36.379467 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-10 00:39:36.379474 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-10 00:39:36.379479 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-10 00:39:36.379483 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-10 00:39:36.379487 | orchestrator | ++ export ARA=false 2026-04-10 00:39:36.379491 | orchestrator | ++ ARA=false 2026-04-10 00:39:36.379495 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-10 00:39:36.379499 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-10 00:39:36.379503 | orchestrator | ++ export TEMPEST=true 2026-04-10 00:39:36.379506 | orchestrator | ++ TEMPEST=true 2026-04-10 00:39:36.379510 | orchestrator | ++ export IS_ZUUL=true 2026-04-10 00:39:36.379514 | orchestrator | ++ IS_ZUUL=true 2026-04-10 00:39:36.379518 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 00:39:36.379522 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 00:39:36.379526 | orchestrator | ++ export EXTERNAL_API=false 2026-04-10 00:39:36.379529 | orchestrator | ++ EXTERNAL_API=false 2026-04-10 00:39:36.379533 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-10 00:39:36.379537 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-10 00:39:36.379541 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-10 00:39:36.379545 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-10 00:39:36.379549 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-10 00:39:36.379553 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-10 00:39:36.379557 | orchestrator | + echo 2026-04-10 00:39:36.379561 | orchestrator | + echo '# PULL IMAGES' 2026-04-10 00:39:36.379565 | orchestrator | + echo 2026-04-10 00:39:36.379576 | orchestrator | ++ semver latest 7.0.0 2026-04-10 00:39:36.419602 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-10 00:39:36.419680 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-10 00:39:36.419686 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-10 00:39:37.483640 | orchestrator | 2026-04-10 00:39:37 | INFO  | Trying to run play pull-images in environment custom 2026-04-10 00:39:47.571069 | orchestrator | 2026-04-10 00:39:47 | INFO  | Prepare task for execution of pull-images. 2026-04-10 00:39:47.644503 | orchestrator | 2026-04-10 00:39:47 | INFO  | Task 2a3d5351-e43f-4a13-86af-1cef8873058a (pull-images) was prepared for execution. 2026-04-10 00:39:47.644574 | orchestrator | 2026-04-10 00:39:47 | INFO  | Task 2a3d5351-e43f-4a13-86af-1cef8873058a is running in background. No more output. Check ARA for logs. 2026-04-10 00:39:48.960678 | orchestrator | 2026-04-10 00:39:48 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-10 00:39:59.037463 | orchestrator | 2026-04-10 00:39:59 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-10 00:39:59.112695 | orchestrator | 2026-04-10 00:39:59 | INFO  | Task 13f0fc63-eb23-4dc8-b7d8-1d0a7a10a75d (wipe-partitions) was prepared for execution. 2026-04-10 00:39:59.112803 | orchestrator | 2026-04-10 00:39:59 | INFO  | It takes a moment until task 13f0fc63-eb23-4dc8-b7d8-1d0a7a10a75d (wipe-partitions) has been started and output is visible here. 
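In the trace above, `semver latest 7.0.0` returns `-1`, the `[[ -1 -ge 0 ]]` test fails, and the script falls through to the `== latest` string comparison before running `osism apply`. A minimal sketch of that gate — a hypothetical `version_ok` helper using `sort -V` rather than the actual `semver` tool from the script:

```shell
# Hypothetical reimplementation of the gate, not the actual semver helper:
# succeed when the requested version is "latest" or >= the minimum version.
version_ok() {
  want="$1"; min="$2"
  if [ "$want" = "latest" ]; then
    return 0                                    # mirrors the '== latest' fallback
  fi
  # sort -V puts the smaller version first; if that is $min, then $want >= $min
  [ "$(printf '%s\n%s\n' "$min" "$want" | sort -V | head -n1)" = "$min" ]
}

version_ok latest 7.0.0 && echo "version gate passed"
```

The two-step shape in the log (numeric compare first, string fallback second) exists because `latest` is not a valid semver and the numeric compare alone would wrongly reject it.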
2026-04-10 00:40:10.458432 | orchestrator | 2026-04-10 00:40:10.458517 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-10 00:40:10.458526 | orchestrator | 2026-04-10 00:40:10.458530 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-10 00:40:10.458538 | orchestrator | Friday 10 April 2026 00:40:02 +0000 (0:00:00.156) 0:00:00.156 ********** 2026-04-10 00:40:10.458565 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:40:10.458571 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:40:10.458575 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:40:10.458579 | orchestrator | 2026-04-10 00:40:10.458583 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-10 00:40:10.458588 | orchestrator | Friday 10 April 2026 00:40:03 +0000 (0:00:00.989) 0:00:01.145 ********** 2026-04-10 00:40:10.458594 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:10.458598 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:40:10.458602 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:40:10.458605 | orchestrator | 2026-04-10 00:40:10.458610 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-10 00:40:10.458614 | orchestrator | Friday 10 April 2026 00:40:03 +0000 (0:00:00.235) 0:00:01.381 ********** 2026-04-10 00:40:10.458618 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:40:10.458623 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:40:10.458627 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:40:10.458630 | orchestrator | 2026-04-10 00:40:10.458634 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-10 00:40:10.458638 | orchestrator | Friday 10 April 2026 00:40:03 +0000 (0:00:00.533) 0:00:01.915 ********** 2026-04-10 00:40:10.458641 | orchestrator | skipping: 
[testbed-node-3] 2026-04-10 00:40:10.458645 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:40:10.458649 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:40:10.458653 | orchestrator | 2026-04-10 00:40:10.458657 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-10 00:40:10.458660 | orchestrator | Friday 10 April 2026 00:40:04 +0000 (0:00:00.240) 0:00:02.156 ********** 2026-04-10 00:40:10.458664 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-10 00:40:10.458670 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-10 00:40:10.458674 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-10 00:40:10.458678 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-10 00:40:10.458682 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-10 00:40:10.458685 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-10 00:40:10.458689 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-10 00:40:10.458693 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-10 00:40:10.458697 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-10 00:40:10.458701 | orchestrator | 2026-04-10 00:40:10.458705 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-10 00:40:10.458708 | orchestrator | Friday 10 April 2026 00:40:05 +0000 (0:00:01.438) 0:00:03.594 ********** 2026-04-10 00:40:10.458712 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-10 00:40:10.458716 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-10 00:40:10.458720 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-10 00:40:10.458724 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-10 00:40:10.458727 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-10 00:40:10.458731 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-10 00:40:10.458735 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-10 00:40:10.458739 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-10 00:40:10.458742 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-10 00:40:10.458746 | orchestrator | 2026-04-10 00:40:10.458750 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-10 00:40:10.458754 | orchestrator | Friday 10 April 2026 00:40:06 +0000 (0:00:01.406) 0:00:05.001 ********** 2026-04-10 00:40:10.458758 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-10 00:40:10.458761 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-10 00:40:10.458765 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-10 00:40:10.458775 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-10 00:40:10.458783 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-10 00:40:10.458787 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-10 00:40:10.458791 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-10 00:40:10.458794 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-10 00:40:10.458798 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-10 00:40:10.458802 | orchestrator | 2026-04-10 00:40:10.458806 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-10 00:40:10.458810 | orchestrator | Friday 10 April 2026 00:40:09 +0000 (0:00:02.173) 0:00:07.174 ********** 2026-04-10 00:40:10.458814 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:40:10.458818 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:40:10.458821 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:40:10.458825 | orchestrator | 2026-04-10 00:40:10.458829 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-10 00:40:10.458833 | orchestrator | Friday 10 April 2026 00:40:09 +0000 (0:00:00.547) 0:00:07.722 ********** 2026-04-10 00:40:10.458837 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:40:10.458841 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:40:10.458844 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:40:10.458848 | orchestrator | 2026-04-10 00:40:10.458852 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:40:10.458857 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:10.458862 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:10.458877 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:10.458881 | orchestrator | 2026-04-10 00:40:10.458885 | orchestrator | 2026-04-10 00:40:10.458889 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:40:10.458893 | orchestrator | Friday 10 April 2026 00:40:10 +0000 (0:00:00.723) 0:00:08.445 ********** 2026-04-10 00:40:10.458897 | orchestrator | =============================================================================== 2026-04-10 00:40:10.458900 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.17s 2026-04-10 00:40:10.458904 | orchestrator | Check device availability ----------------------------------------------- 1.44s 2026-04-10 00:40:10.458910 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.41s 2026-04-10 00:40:10.458915 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.99s 2026-04-10 00:40:10.458921 | orchestrator | Request device events from the kernel 
----------------------------------- 0.72s 2026-04-10 00:40:10.458927 | orchestrator | Reload udev rules ------------------------------------------------------- 0.55s 2026-04-10 00:40:10.458934 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.53s 2026-04-10 00:40:10.458940 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-04-10 00:40:10.458946 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2026-04-10 00:40:21.696485 | orchestrator | 2026-04-10 00:40:21 | INFO  | Prepare task for execution of facts. 2026-04-10 00:40:21.771430 | orchestrator | 2026-04-10 00:40:21 | INFO  | Task 9a2c9104-cef3-40e8-a242-c29ef8322f53 (facts) was prepared for execution. 2026-04-10 00:40:21.771540 | orchestrator | 2026-04-10 00:40:21 | INFO  | It takes a moment until task 9a2c9104-cef3-40e8-a242-c29ef8322f53 (facts) has been started and output is visible here. 2026-04-10 00:40:34.232416 | orchestrator | 2026-04-10 00:40:34.232518 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-10 00:40:34.232533 | orchestrator | 2026-04-10 00:40:34.232567 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-10 00:40:34.232578 | orchestrator | Friday 10 April 2026 00:40:25 +0000 (0:00:00.341) 0:00:00.341 ********** 2026-04-10 00:40:34.232587 | orchestrator | ok: [testbed-manager] 2026-04-10 00:40:34.232596 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:40:34.232605 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:40:34.232614 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:40:34.232622 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:40:34.232631 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:40:34.232639 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:40:34.232648 | orchestrator | 2026-04-10 00:40:34.232674 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-04-10 00:40:34.232690 | orchestrator | Friday 10 April 2026 00:40:26 +0000 (0:00:01.292) 0:00:01.634 ********** 2026-04-10 00:40:34.232706 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:40:34.232721 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:40:34.232735 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:40:34.232751 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:40:34.232765 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:34.232781 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:40:34.232796 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:40:34.232811 | orchestrator | 2026-04-10 00:40:34.232820 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-10 00:40:34.232829 | orchestrator | 2026-04-10 00:40:34.232838 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-10 00:40:34.232847 | orchestrator | Friday 10 April 2026 00:40:27 +0000 (0:00:01.173) 0:00:02.808 ********** 2026-04-10 00:40:34.232857 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:40:34.232865 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:40:34.232874 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:40:34.232883 | orchestrator | ok: [testbed-manager] 2026-04-10 00:40:34.232892 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:40:34.232900 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:40:34.232909 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:40:34.232918 | orchestrator | 2026-04-10 00:40:34.232927 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-10 00:40:34.232935 | orchestrator | 2026-04-10 00:40:34.232944 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-10 00:40:34.232954 | orchestrator | Friday 10 April 
2026 00:40:33 +0000 (0:00:05.871) 0:00:08.679 ********** 2026-04-10 00:40:34.232962 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:40:34.232971 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:40:34.232980 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:40:34.232989 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:40:34.232998 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:34.233006 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:40:34.233015 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:40:34.233024 | orchestrator | 2026-04-10 00:40:34.233033 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:40:34.233042 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:34.233053 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:34.233062 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:34.233070 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:34.233079 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:34.233099 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:34.233108 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:40:34.233117 | orchestrator | 2026-04-10 00:40:34.233126 | orchestrator | 2026-04-10 00:40:34.233134 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:40:34.233143 | orchestrator | Friday 10 April 2026 00:40:33 +0000 (0:00:00.504) 0:00:09.183 ********** 2026-04-10 00:40:34.233152 
| orchestrator | =============================================================================== 2026-04-10 00:40:34.233161 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.87s 2026-04-10 00:40:34.233169 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-04-10 00:40:34.233178 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2026-04-10 00:40:34.233187 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-04-10 00:40:35.711516 | orchestrator | 2026-04-10 00:40:35 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-10 00:40:35.777453 | orchestrator | 2026-04-10 00:40:35 | INFO  | Task ccfc623b-3a91-42de-baae-b9b15eead243 (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-10 00:40:35.777550 | orchestrator | 2026-04-10 00:40:35 | INFO  | It takes a moment until task ccfc623b-3a91-42de-baae-b9b15eead243 (ceph-configure-lvm-volumes) has been started and output is visible here. 
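The wipe-partitions play earlier reduces to a short per-node sequence: drop filesystem/partition signatures, zero the first 32M of each OSD disk, then refresh udev. A dry-run sketch of that sequence, assuming the tasks map onto `wipefs`, `dd`, and `udevadm` as their names suggest; `DRY_RUN=1` (the default here) only prints the commands, since running them for real is destructive:

```shell
# Assumed equivalents of the play's tasks; DRY_RUN=1 (default) only prints them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

wipe_disk() {
  dev="$1"
  run wipefs --all "$dev"                        # "Wipe partitions with wipefs"
  run dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do        # the devices wiped on nodes 3-5
  wipe_disk "$dev"
done
run udevadm control --reload-rules               # "Reload udev rules"
run udevadm trigger                              # "Request device events from the kernel"
```

The udev reload/trigger at the end matters: without re-emitted device events, stale `/dev/disk/by-*` links for the wiped signatures can linger until the next reboot.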
2026-04-10 00:40:46.345700 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-10 00:40:46.345799 | orchestrator | 2.16.14 2026-04-10 00:40:46.345814 | orchestrator | 2026-04-10 00:40:46.345834 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-10 00:40:46.345845 | orchestrator | 2026-04-10 00:40:46.345854 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-10 00:40:46.345863 | orchestrator | Friday 10 April 2026 00:40:40 +0000 (0:00:00.279) 0:00:00.279 ********** 2026-04-10 00:40:46.345873 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 00:40:46.345883 | orchestrator | 2026-04-10 00:40:46.345891 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-10 00:40:46.345900 | orchestrator | Friday 10 April 2026 00:40:40 +0000 (0:00:00.209) 0:00:00.488 ********** 2026-04-10 00:40:46.345910 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:40:46.345919 | orchestrator | 2026-04-10 00:40:46.345928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.345937 | orchestrator | Friday 10 April 2026 00:40:40 +0000 (0:00:00.186) 0:00:00.674 ********** 2026-04-10 00:40:46.345946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-10 00:40:46.345954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-10 00:40:46.345963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-10 00:40:46.345971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-10 00:40:46.345980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-10 
00:40:46.345988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-10 00:40:46.345997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-10 00:40:46.346006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-10 00:40:46.346062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-10 00:40:46.346074 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-10 00:40:46.346103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-10 00:40:46.346112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-10 00:40:46.346121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-10 00:40:46.346130 | orchestrator | 2026-04-10 00:40:46.346138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346147 | orchestrator | Friday 10 April 2026 00:40:40 +0000 (0:00:00.328) 0:00:01.003 ********** 2026-04-10 00:40:46.346156 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346165 | orchestrator | 2026-04-10 00:40:46.346173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346182 | orchestrator | Friday 10 April 2026 00:40:41 +0000 (0:00:00.389) 0:00:01.392 ********** 2026-04-10 00:40:46.346191 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346199 | orchestrator | 2026-04-10 00:40:46.346208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346221 | orchestrator | Friday 10 April 2026 00:40:41 +0000 (0:00:00.163) 0:00:01.556 ********** 2026-04-10 
00:40:46.346231 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346241 | orchestrator | 2026-04-10 00:40:46.346250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346261 | orchestrator | Friday 10 April 2026 00:40:41 +0000 (0:00:00.174) 0:00:01.730 ********** 2026-04-10 00:40:46.346271 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346337 | orchestrator | 2026-04-10 00:40:46.346349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346359 | orchestrator | Friday 10 April 2026 00:40:41 +0000 (0:00:00.161) 0:00:01.892 ********** 2026-04-10 00:40:46.346369 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346379 | orchestrator | 2026-04-10 00:40:46.346389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346399 | orchestrator | Friday 10 April 2026 00:40:41 +0000 (0:00:00.171) 0:00:02.063 ********** 2026-04-10 00:40:46.346409 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346418 | orchestrator | 2026-04-10 00:40:46.346428 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346438 | orchestrator | Friday 10 April 2026 00:40:42 +0000 (0:00:00.171) 0:00:02.235 ********** 2026-04-10 00:40:46.346448 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346458 | orchestrator | 2026-04-10 00:40:46.346468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346479 | orchestrator | Friday 10 April 2026 00:40:42 +0000 (0:00:00.172) 0:00:02.408 ********** 2026-04-10 00:40:46.346489 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346499 | orchestrator | 2026-04-10 00:40:46.346509 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-10 00:40:46.346518 | orchestrator | Friday 10 April 2026 00:40:42 +0000 (0:00:00.171) 0:00:02.579 ********** 2026-04-10 00:40:46.346529 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae) 2026-04-10 00:40:46.346540 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae) 2026-04-10 00:40:46.346550 | orchestrator | 2026-04-10 00:40:46.346560 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346586 | orchestrator | Friday 10 April 2026 00:40:42 +0000 (0:00:00.348) 0:00:02.927 ********** 2026-04-10 00:40:46.346597 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a) 2026-04-10 00:40:46.346608 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a) 2026-04-10 00:40:46.346618 | orchestrator | 2026-04-10 00:40:46.346626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346643 | orchestrator | Friday 10 April 2026 00:40:43 +0000 (0:00:00.374) 0:00:03.302 ********** 2026-04-10 00:40:46.346651 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e) 2026-04-10 00:40:46.346660 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e) 2026-04-10 00:40:46.346669 | orchestrator | 2026-04-10 00:40:46.346678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346686 | orchestrator | Friday 10 April 2026 00:40:43 +0000 (0:00:00.499) 0:00:03.802 ********** 2026-04-10 00:40:46.346695 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755) 2026-04-10 00:40:46.346704 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755) 2026-04-10 00:40:46.346713 | orchestrator | 2026-04-10 00:40:46.346721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:40:46.346730 | orchestrator | Friday 10 April 2026 00:40:44 +0000 (0:00:00.511) 0:00:04.313 ********** 2026-04-10 00:40:46.346739 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-10 00:40:46.346747 | orchestrator | 2026-04-10 00:40:46.346756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:40:46.346765 | orchestrator | Friday 10 April 2026 00:40:44 +0000 (0:00:00.540) 0:00:04.853 ********** 2026-04-10 00:40:46.346779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-10 00:40:46.346788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-10 00:40:46.346796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-10 00:40:46.346805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-10 00:40:46.346814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-10 00:40:46.346822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-10 00:40:46.346831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-10 00:40:46.346840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-10 00:40:46.346848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-10 00:40:46.346857 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-10 00:40:46.346866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-10 00:40:46.346875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-10 00:40:46.346883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-10 00:40:46.346892 | orchestrator | 2026-04-10 00:40:46.346901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:40:46.346909 | orchestrator | Friday 10 April 2026 00:40:45 +0000 (0:00:00.327) 0:00:05.180 ********** 2026-04-10 00:40:46.346918 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346927 | orchestrator | 2026-04-10 00:40:46.346935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:40:46.346944 | orchestrator | Friday 10 April 2026 00:40:45 +0000 (0:00:00.196) 0:00:05.377 ********** 2026-04-10 00:40:46.346953 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.346962 | orchestrator | 2026-04-10 00:40:46.346970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:40:46.346979 | orchestrator | Friday 10 April 2026 00:40:45 +0000 (0:00:00.184) 0:00:05.561 ********** 2026-04-10 00:40:46.346988 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.347002 | orchestrator | 2026-04-10 00:40:46.347011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:40:46.347019 | orchestrator | Friday 10 April 2026 00:40:45 +0000 (0:00:00.187) 0:00:05.748 ********** 2026-04-10 00:40:46.347028 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:40:46.347037 | orchestrator | 2026-04-10 00:40:46.347045 | orchestrator | TASK [Add known 
partitions to the list of available block devices] *************
2026-04-10 00:40:46.347054 | orchestrator | Friday 10 April 2026 00:40:45 +0000 (0:00:00.192) 0:00:05.941 **********
2026-04-10 00:40:46.347063 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:46.347071 | orchestrator |
2026-04-10 00:40:46.347084 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:46.347093 | orchestrator | Friday 10 April 2026 00:40:45 +0000 (0:00:00.185) 0:00:06.126 **********
2026-04-10 00:40:46.347101 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:46.347110 | orchestrator |
2026-04-10 00:40:46.347119 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:46.347128 | orchestrator | Friday 10 April 2026 00:40:46 +0000 (0:00:00.191) 0:00:06.318 **********
2026-04-10 00:40:46.347136 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:46.347145 | orchestrator |
2026-04-10 00:40:46.347159 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:53.290369 | orchestrator | Friday 10 April 2026 00:40:46 +0000 (0:00:00.182) 0:00:06.500 **********
2026-04-10 00:40:53.290479 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290497 | orchestrator |
2026-04-10 00:40:53.290510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:53.290521 | orchestrator | Friday 10 April 2026 00:40:46 +0000 (0:00:00.172) 0:00:06.672 **********
2026-04-10 00:40:53.290533 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-10 00:40:53.290545 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-10 00:40:53.290556 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-10 00:40:53.290567 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-10 00:40:53.290578 | orchestrator |
2026-04-10 00:40:53.290589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:53.290601 | orchestrator | Friday 10 April 2026 00:40:47 +0000 (0:00:00.915) 0:00:07.588 **********
2026-04-10 00:40:53.290612 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290623 | orchestrator |
2026-04-10 00:40:53.290634 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:53.290645 | orchestrator | Friday 10 April 2026 00:40:47 +0000 (0:00:00.190) 0:00:07.778 **********
2026-04-10 00:40:53.290656 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290667 | orchestrator |
2026-04-10 00:40:53.290678 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:53.290688 | orchestrator | Friday 10 April 2026 00:40:47 +0000 (0:00:00.174) 0:00:07.952 **********
2026-04-10 00:40:53.290699 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290710 | orchestrator |
2026-04-10 00:40:53.290721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:40:53.290732 | orchestrator | Friday 10 April 2026 00:40:47 +0000 (0:00:00.197) 0:00:08.149 **********
2026-04-10 00:40:53.290743 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290754 | orchestrator |
2026-04-10 00:40:53.290765 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-10 00:40:53.290776 | orchestrator | Friday 10 April 2026 00:40:48 +0000 (0:00:00.183) 0:00:08.333 **********
2026-04-10 00:40:53.290787 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-10 00:40:53.290798 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-10 00:40:53.290809 | orchestrator |
2026-04-10 00:40:53.290820 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-10 00:40:53.290831 | orchestrator | Friday 10 April 2026 00:40:48 +0000 (0:00:00.200) 0:00:08.533 **********
2026-04-10 00:40:53.290867 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290881 | orchestrator |
2026-04-10 00:40:53.290894 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-10 00:40:53.290907 | orchestrator | Friday 10 April 2026 00:40:48 +0000 (0:00:00.127) 0:00:08.661 **********
2026-04-10 00:40:53.290919 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290932 | orchestrator |
2026-04-10 00:40:53.290947 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-10 00:40:53.290958 | orchestrator | Friday 10 April 2026 00:40:48 +0000 (0:00:00.119) 0:00:08.780 **********
2026-04-10 00:40:53.290969 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.290980 | orchestrator |
2026-04-10 00:40:53.290991 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-10 00:40:53.291002 | orchestrator | Friday 10 April 2026 00:40:48 +0000 (0:00:00.126) 0:00:08.907 **********
2026-04-10 00:40:53.291013 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:40:53.291024 | orchestrator |
2026-04-10 00:40:53.291038 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-10 00:40:53.291056 | orchestrator | Friday 10 April 2026 00:40:48 +0000 (0:00:00.134) 0:00:09.042 **********
2026-04-10 00:40:53.291082 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a24d887-4b45-578e-8445-fe6f68cb2659'}})
2026-04-10 00:40:53.291106 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '83f5954c-7956-54fb-af17-18f84b92edf0'}})
2026-04-10 00:40:53.291122 | orchestrator |
2026-04-10 00:40:53.291138 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-10 00:40:53.291155 | orchestrator | Friday 10 April 2026 00:40:49 +0000 (0:00:00.154) 0:00:09.196 **********
2026-04-10 00:40:53.291172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a24d887-4b45-578e-8445-fe6f68cb2659'}})
2026-04-10 00:40:53.291204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '83f5954c-7956-54fb-af17-18f84b92edf0'}})
2026-04-10 00:40:53.291220 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291235 | orchestrator |
2026-04-10 00:40:53.291251 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-10 00:40:53.291266 | orchestrator | Friday 10 April 2026 00:40:49 +0000 (0:00:00.163) 0:00:09.359 **********
2026-04-10 00:40:53.291338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a24d887-4b45-578e-8445-fe6f68cb2659'}})
2026-04-10 00:40:53.291358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '83f5954c-7956-54fb-af17-18f84b92edf0'}})
2026-04-10 00:40:53.291376 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291395 | orchestrator |
2026-04-10 00:40:53.291412 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-10 00:40:53.291430 | orchestrator | Friday 10 April 2026 00:40:49 +0000 (0:00:00.318) 0:00:09.678 **********
2026-04-10 00:40:53.291442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a24d887-4b45-578e-8445-fe6f68cb2659'}})
2026-04-10 00:40:53.291474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '83f5954c-7956-54fb-af17-18f84b92edf0'}})
2026-04-10 00:40:53.291486 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291496 | orchestrator |
2026-04-10 00:40:53.291507 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-10 00:40:53.291518 | orchestrator | Friday 10 April 2026 00:40:49 +0000 (0:00:00.144) 0:00:09.823 **********
2026-04-10 00:40:53.291529 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:40:53.291539 | orchestrator |
2026-04-10 00:40:53.291550 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-10 00:40:53.291561 | orchestrator | Friday 10 April 2026 00:40:49 +0000 (0:00:00.119) 0:00:09.943 **********
2026-04-10 00:40:53.291572 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:40:53.291596 | orchestrator |
2026-04-10 00:40:53.291607 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-10 00:40:53.291617 | orchestrator | Friday 10 April 2026 00:40:49 +0000 (0:00:00.126) 0:00:10.069 **********
2026-04-10 00:40:53.291628 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291639 | orchestrator |
2026-04-10 00:40:53.291662 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-10 00:40:53.291673 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.131) 0:00:10.201 **********
2026-04-10 00:40:53.291684 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291695 | orchestrator |
2026-04-10 00:40:53.291705 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-10 00:40:53.291716 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.123) 0:00:10.324 **********
2026-04-10 00:40:53.291727 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291738 | orchestrator |
2026-04-10 00:40:53.291749 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-10 00:40:53.291759 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.125) 0:00:10.449 **********
2026-04-10 00:40:53.291770 | orchestrator | ok: [testbed-node-3] => {
2026-04-10 00:40:53.291781 | orchestrator |     "ceph_osd_devices": {
2026-04-10 00:40:53.291792 | orchestrator |         "sdb": {
2026-04-10 00:40:53.291804 | orchestrator |             "osd_lvm_uuid": "4a24d887-4b45-578e-8445-fe6f68cb2659"
2026-04-10 00:40:53.291815 | orchestrator |         },
2026-04-10 00:40:53.291826 | orchestrator |         "sdc": {
2026-04-10 00:40:53.291838 | orchestrator |             "osd_lvm_uuid": "83f5954c-7956-54fb-af17-18f84b92edf0"
2026-04-10 00:40:53.291849 | orchestrator |         }
2026-04-10 00:40:53.291860 | orchestrator |     }
2026-04-10 00:40:53.291871 | orchestrator | }
2026-04-10 00:40:53.291882 | orchestrator |
2026-04-10 00:40:53.291892 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-10 00:40:53.291903 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.118) 0:00:10.567 **********
2026-04-10 00:40:53.291914 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291925 | orchestrator |
2026-04-10 00:40:53.291935 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-10 00:40:53.291946 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.133) 0:00:10.701 **********
2026-04-10 00:40:53.291957 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.291968 | orchestrator |
2026-04-10 00:40:53.291978 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-10 00:40:53.291989 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.120) 0:00:10.822 **********
2026-04-10 00:40:53.292000 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:40:53.292011 | orchestrator |
2026-04-10 00:40:53.292021 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-10 00:40:53.292032 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.126) 0:00:10.948 **********
2026-04-10 00:40:53.292043 | orchestrator | changed: [testbed-node-3] => {
2026-04-10 00:40:53.292054 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-10 00:40:53.292065 | orchestrator |         "ceph_osd_devices": {
2026-04-10 00:40:53.292076 | orchestrator |             "sdb": {
2026-04-10 00:40:53.292086 | orchestrator |                 "osd_lvm_uuid": "4a24d887-4b45-578e-8445-fe6f68cb2659"
2026-04-10 00:40:53.292097 | orchestrator |             },
2026-04-10 00:40:53.292108 | orchestrator |             "sdc": {
2026-04-10 00:40:53.292119 | orchestrator |                 "osd_lvm_uuid": "83f5954c-7956-54fb-af17-18f84b92edf0"
2026-04-10 00:40:53.292130 | orchestrator |             }
2026-04-10 00:40:53.292140 | orchestrator |         },
2026-04-10 00:40:53.292151 | orchestrator |         "lvm_volumes": [
2026-04-10 00:40:53.292162 | orchestrator |             {
2026-04-10 00:40:53.292173 | orchestrator |                 "data": "osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659",
2026-04-10 00:40:53.292184 | orchestrator |                 "data_vg": "ceph-4a24d887-4b45-578e-8445-fe6f68cb2659"
2026-04-10 00:40:53.292202 | orchestrator |             },
2026-04-10 00:40:53.292213 | orchestrator |             {
2026-04-10 00:40:53.292224 | orchestrator |                 "data": "osd-block-83f5954c-7956-54fb-af17-18f84b92edf0",
2026-04-10 00:40:53.292235 | orchestrator |                 "data_vg": "ceph-83f5954c-7956-54fb-af17-18f84b92edf0"
2026-04-10 00:40:53.292246 | orchestrator |             }
2026-04-10 00:40:53.292256 | orchestrator |         ]
2026-04-10 00:40:53.292267 | orchestrator |     }
2026-04-10 00:40:53.292304 | orchestrator | }
2026-04-10 00:40:53.292318 | orchestrator |
2026-04-10 00:40:53.292328 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-10 00:40:53.292339 | orchestrator | Friday 10 April 2026 00:40:50 +0000 (0:00:00.208) 0:00:11.157 **********
2026-04-10 00:40:53.292350 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-10 00:40:53.292361 | orchestrator |
2026-04-10 00:40:53.292371 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-10 00:40:53.292382 | orchestrator |
2026-04-10 00:40:53.292392 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-10 00:40:53.292403 | orchestrator | Friday 10 April 2026 00:40:52 +0000 (0:00:01.858) 0:00:13.015 **********
2026-04-10 00:40:53.292414 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-10 00:40:53.292425 | orchestrator |
2026-04-10 00:40:53.292440 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-10 00:40:53.292451 | orchestrator | Friday 10 April 2026 00:40:53 +0000 (0:00:00.212) 0:00:13.236 **********
2026-04-10 00:40:53.292462 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:40:53.292473 | orchestrator |
2026-04-10 00:40:53.292491 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336517 | orchestrator | Friday 10 April 2026 00:40:53 +0000 (0:00:00.212) 0:00:13.448 **********
2026-04-10 00:41:00.336610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-10 00:41:00.336622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-10 00:41:00.336630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-10 00:41:00.336638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-10 00:41:00.336646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-10 00:41:00.336653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-10 00:41:00.336661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-10 00:41:00.336672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-10 00:41:00.336679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-10 00:41:00.336687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-10 00:41:00.336693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-10 00:41:00.336700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-10 00:41:00.336706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-10 00:41:00.336712 | orchestrator |
2026-04-10 00:41:00.336720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336726 | orchestrator | Friday 10 April 2026 00:40:53 +0000 (0:00:00.327) 0:00:13.776 **********
2026-04-10 00:41:00.336733 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.336740 | orchestrator |
2026-04-10 00:41:00.336747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336754 | orchestrator | Friday 10 April 2026 00:40:53 +0000 (0:00:00.182) 0:00:13.958 **********
2026-04-10 00:41:00.336778 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.336790 | orchestrator |
2026-04-10 00:41:00.336800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336810 | orchestrator | Friday 10 April 2026 00:40:53 +0000 (0:00:00.171) 0:00:14.130 **********
2026-04-10 00:41:00.336820 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.336830 | orchestrator |
2026-04-10 00:41:00.336840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336851 | orchestrator | Friday 10 April 2026 00:40:54 +0000 (0:00:00.166) 0:00:14.296 **********
2026-04-10 00:41:00.336861 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.336871 | orchestrator |
2026-04-10 00:41:00.336882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336892 | orchestrator | Friday 10 April 2026 00:40:54 +0000 (0:00:00.167) 0:00:14.464 **********
2026-04-10 00:41:00.336902 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.336913 | orchestrator |
2026-04-10 00:41:00.336924 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336935 | orchestrator | Friday 10 April 2026 00:40:54 +0000 (0:00:00.451) 0:00:14.916 **********
2026-04-10 00:41:00.336945 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.336957 | orchestrator |
2026-04-10 00:41:00.336968 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.336979 | orchestrator | Friday 10 April 2026 00:40:54 +0000 (0:00:00.178) 0:00:15.094 **********
2026-04-10 00:41:00.336989 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.336999 | orchestrator |
2026-04-10 00:41:00.337009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.337019 | orchestrator | Friday 10 April 2026 00:40:55 +0000 (0:00:00.180) 0:00:15.275 **********
2026-04-10 00:41:00.337029 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:00.337039 | orchestrator |
2026-04-10 00:41:00.337050 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.337061 | orchestrator | Friday 10 April 2026 00:40:55 +0000 (0:00:00.174) 0:00:15.450 **********
2026-04-10 00:41:00.337071 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762)
2026-04-10 00:41:00.337083 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762)
2026-04-10 00:41:00.337093 | orchestrator |
2026-04-10 00:41:00.337122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.337135 | orchestrator | Friday 10 April 2026 00:40:55 +0000 (0:00:00.373) 0:00:15.823 **********
2026-04-10 00:41:00.337145 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23)
2026-04-10 00:41:00.337156 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23)
2026-04-10 00:41:00.337167 | orchestrator |
2026-04-10 00:41:00.337178 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.337188 | orchestrator | Friday 10 April 2026 00:40:56 +0000 (0:00:00.374) 0:00:16.198 **********
2026-04-10 00:41:00.337199 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd)
2026-04-10 00:41:00.337209 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd)
2026-04-10 00:41:00.337220 | orchestrator |
2026-04-10 00:41:00.337231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.337260 | orchestrator | Friday 10 April 2026 00:40:56 +0000 (0:00:00.374) 0:00:16.572 **********
2026-04-10 00:41:00.337271 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16)
2026-04-10 00:41:00.337306 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16)
2026-04-10 00:41:00.337318 | orchestrator |
2026-04-10 00:41:00.337341 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:00.337353 | orchestrator | Friday 10 April 2026 00:40:56 +0000 (0:00:00.416) 0:00:16.989 **********
2026-04-10 00:41:00.337364 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-10 00:41:00.337375 | orchestrator |
2026-04-10 00:41:00.337386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:00.337398 | orchestrator | Friday 10 April 2026 00:40:57 +0000 (0:00:00.329) 0:00:17.318 **********
2026-04-10 00:41:00.337408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-10 00:41:00.337420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-10 00:41:00.337432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-10 00:41:00.337443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-10 00:41:00.337454 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-10 00:41:00.337463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-10 00:41:00.337473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-10 00:41:00.337483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-10 00:41:00.337494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-10 00:41:00.337504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-10 00:41:00.337514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-10 00:41:00.337525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-10 00:41:00.337535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-10 00:41:00.337546 | orchestrator | 2026-04-10 00:41:00.337557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337568 | orchestrator | Friday 10 April 2026 00:40:57 +0000 (0:00:00.370) 0:00:17.688 ********** 2026-04-10 00:41:00.337578 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337588 | orchestrator | 2026-04-10 00:41:00.337601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337612 | orchestrator | Friday 10 April 2026 00:40:57 +0000 (0:00:00.188) 0:00:17.877 ********** 2026-04-10 00:41:00.337623 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337635 | orchestrator | 2026-04-10 00:41:00.337646 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337656 | orchestrator | Friday 10 April 2026 00:40:58 +0000 (0:00:00.694) 0:00:18.571 ********** 2026-04-10 00:41:00.337666 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337676 | orchestrator | 2026-04-10 00:41:00.337687 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337698 | orchestrator | Friday 10 April 2026 00:40:58 +0000 (0:00:00.221) 0:00:18.793 ********** 2026-04-10 00:41:00.337708 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337718 | orchestrator | 2026-04-10 00:41:00.337728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337738 | orchestrator | Friday 10 April 2026 00:40:58 +0000 (0:00:00.186) 0:00:18.979 ********** 2026-04-10 00:41:00.337748 
| orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337759 | orchestrator | 2026-04-10 00:41:00.337769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337780 | orchestrator | Friday 10 April 2026 00:40:59 +0000 (0:00:00.183) 0:00:19.163 ********** 2026-04-10 00:41:00.337791 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337814 | orchestrator | 2026-04-10 00:41:00.337834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337845 | orchestrator | Friday 10 April 2026 00:40:59 +0000 (0:00:00.215) 0:00:19.378 ********** 2026-04-10 00:41:00.337856 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337867 | orchestrator | 2026-04-10 00:41:00.337878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337890 | orchestrator | Friday 10 April 2026 00:40:59 +0000 (0:00:00.200) 0:00:19.579 ********** 2026-04-10 00:41:00.337901 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:00.337911 | orchestrator | 2026-04-10 00:41:00.337921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.337931 | orchestrator | Friday 10 April 2026 00:40:59 +0000 (0:00:00.225) 0:00:19.804 ********** 2026-04-10 00:41:00.337942 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-10 00:41:00.337953 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-10 00:41:00.337964 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-10 00:41:00.337975 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-10 00:41:00.337985 | orchestrator | 2026-04-10 00:41:00.337995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:00.338006 | orchestrator | Friday 10 April 2026 00:41:00 +0000 (0:00:00.578) 0:00:20.383 
********** 2026-04-10 00:41:00.338072 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.423890 | orchestrator | 2026-04-10 00:41:05.424014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:05.424051 | orchestrator | Friday 10 April 2026 00:41:00 +0000 (0:00:00.194) 0:00:20.578 ********** 2026-04-10 00:41:05.424069 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424080 | orchestrator | 2026-04-10 00:41:05.424101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:05.424110 | orchestrator | Friday 10 April 2026 00:41:00 +0000 (0:00:00.184) 0:00:20.762 ********** 2026-04-10 00:41:05.424119 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424128 | orchestrator | 2026-04-10 00:41:05.424137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:41:05.424146 | orchestrator | Friday 10 April 2026 00:41:00 +0000 (0:00:00.157) 0:00:20.920 ********** 2026-04-10 00:41:05.424155 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424163 | orchestrator | 2026-04-10 00:41:05.424172 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-10 00:41:05.424181 | orchestrator | Friday 10 April 2026 00:41:00 +0000 (0:00:00.180) 0:00:21.101 ********** 2026-04-10 00:41:05.424190 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-10 00:41:05.424199 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-10 00:41:05.424211 | orchestrator | 2026-04-10 00:41:05.424228 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-10 00:41:05.424243 | orchestrator | Friday 10 April 2026 00:41:01 +0000 (0:00:00.320) 0:00:21.421 ********** 2026-04-10 00:41:05.424256 | orchestrator | skipping: 
[testbed-node-4] 2026-04-10 00:41:05.424277 | orchestrator | 2026-04-10 00:41:05.424321 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-10 00:41:05.424335 | orchestrator | Friday 10 April 2026 00:41:01 +0000 (0:00:00.096) 0:00:21.518 ********** 2026-04-10 00:41:05.424348 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424362 | orchestrator | 2026-04-10 00:41:05.424376 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-10 00:41:05.424391 | orchestrator | Friday 10 April 2026 00:41:01 +0000 (0:00:00.107) 0:00:21.626 ********** 2026-04-10 00:41:05.424403 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424416 | orchestrator | 2026-04-10 00:41:05.424431 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-10 00:41:05.424445 | orchestrator | Friday 10 April 2026 00:41:01 +0000 (0:00:00.107) 0:00:21.733 ********** 2026-04-10 00:41:05.424489 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:41:05.424506 | orchestrator | 2026-04-10 00:41:05.424523 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-10 00:41:05.424539 | orchestrator | Friday 10 April 2026 00:41:01 +0000 (0:00:00.113) 0:00:21.847 ********** 2026-04-10 00:41:05.424554 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '465b2d07-90ab-575b-b156-9a24eede9b64'}}) 2026-04-10 00:41:05.424570 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a684d377-5ec1-594b-83a4-e92528b1ce81'}}) 2026-04-10 00:41:05.424585 | orchestrator | 2026-04-10 00:41:05.424597 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-10 00:41:05.424608 | orchestrator | Friday 10 April 2026 00:41:01 +0000 (0:00:00.151) 0:00:21.999 ********** 2026-04-10 00:41:05.424619 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '465b2d07-90ab-575b-b156-9a24eede9b64'}})  2026-04-10 00:41:05.424641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a684d377-5ec1-594b-83a4-e92528b1ce81'}})  2026-04-10 00:41:05.424652 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424663 | orchestrator | 2026-04-10 00:41:05.424673 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-10 00:41:05.424683 | orchestrator | Friday 10 April 2026 00:41:01 +0000 (0:00:00.129) 0:00:22.128 ********** 2026-04-10 00:41:05.424693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '465b2d07-90ab-575b-b156-9a24eede9b64'}})  2026-04-10 00:41:05.424703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a684d377-5ec1-594b-83a4-e92528b1ce81'}})  2026-04-10 00:41:05.424714 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424724 | orchestrator | 2026-04-10 00:41:05.424734 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-10 00:41:05.424744 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.124) 0:00:22.252 ********** 2026-04-10 00:41:05.424754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '465b2d07-90ab-575b-b156-9a24eede9b64'}})  2026-04-10 00:41:05.424763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a684d377-5ec1-594b-83a4-e92528b1ce81'}})  2026-04-10 00:41:05.424773 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424783 | orchestrator | 2026-04-10 00:41:05.424812 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-10 00:41:05.424821 | orchestrator | Friday 10 April 2026 00:41:02 +0000 
(0:00:00.114) 0:00:22.367 ********** 2026-04-10 00:41:05.424830 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:41:05.424839 | orchestrator | 2026-04-10 00:41:05.424848 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-10 00:41:05.424857 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.100) 0:00:22.467 ********** 2026-04-10 00:41:05.424865 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:41:05.424874 | orchestrator | 2026-04-10 00:41:05.424883 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-10 00:41:05.424892 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.093) 0:00:22.561 ********** 2026-04-10 00:41:05.424920 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424929 | orchestrator | 2026-04-10 00:41:05.424938 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-10 00:41:05.424947 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.087) 0:00:22.648 ********** 2026-04-10 00:41:05.424956 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.424964 | orchestrator | 2026-04-10 00:41:05.424973 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-10 00:41:05.424982 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.227) 0:00:22.875 ********** 2026-04-10 00:41:05.424991 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:41:05.425007 | orchestrator | 2026-04-10 00:41:05.425016 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-10 00:41:05.425024 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.094) 0:00:22.969 ********** 2026-04-10 00:41:05.425033 | orchestrator | ok: [testbed-node-4] => { 2026-04-10 00:41:05.425042 | orchestrator |  "ceph_osd_devices": { 2026-04-10 00:41:05.425051 | orchestrator |  "sdb": 
{
2026-04-10 00:41:05.425060 | orchestrator |             "osd_lvm_uuid": "465b2d07-90ab-575b-b156-9a24eede9b64"
2026-04-10 00:41:05.425070 | orchestrator |         },
2026-04-10 00:41:05.425079 | orchestrator |         "sdc": {
2026-04-10 00:41:05.425088 | orchestrator |             "osd_lvm_uuid": "a684d377-5ec1-594b-83a4-e92528b1ce81"
2026-04-10 00:41:05.425100 | orchestrator |         }
2026-04-10 00:41:05.425115 | orchestrator |     }
2026-04-10 00:41:05.425126 | orchestrator | }
2026-04-10 00:41:05.425134 | orchestrator |
2026-04-10 00:41:05.425143 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-10 00:41:05.425152 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.099) 0:00:23.069 **********
2026-04-10 00:41:05.425161 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:05.425170 | orchestrator |
2026-04-10 00:41:05.425178 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-10 00:41:05.425187 | orchestrator | Friday 10 April 2026 00:41:02 +0000 (0:00:00.092) 0:00:23.162 **********
2026-04-10 00:41:05.425196 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:05.425205 | orchestrator |
2026-04-10 00:41:05.425213 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-10 00:41:05.425222 | orchestrator | Friday 10 April 2026 00:41:03 +0000 (0:00:00.097) 0:00:23.259 **********
2026-04-10 00:41:05.425244 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:41:05.425253 | orchestrator |
2026-04-10 00:41:05.425262 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-10 00:41:05.425271 | orchestrator | Friday 10 April 2026 00:41:03 +0000 (0:00:00.107) 0:00:23.367 **********
2026-04-10 00:41:05.425280 | orchestrator | changed: [testbed-node-4] => {
2026-04-10 00:41:05.425318 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-10 00:41:05.425333 | orchestrator |         "ceph_osd_devices": {
2026-04-10 00:41:05.425343 | orchestrator |             "sdb": {
2026-04-10 00:41:05.425352 | orchestrator |                 "osd_lvm_uuid": "465b2d07-90ab-575b-b156-9a24eede9b64"
2026-04-10 00:41:05.425360 | orchestrator |             },
2026-04-10 00:41:05.425369 | orchestrator |             "sdc": {
2026-04-10 00:41:05.425389 | orchestrator |                 "osd_lvm_uuid": "a684d377-5ec1-594b-83a4-e92528b1ce81"
2026-04-10 00:41:05.425398 | orchestrator |             }
2026-04-10 00:41:05.425407 | orchestrator |         },
2026-04-10 00:41:05.425416 | orchestrator |         "lvm_volumes": [
2026-04-10 00:41:05.425425 | orchestrator |             {
2026-04-10 00:41:05.425434 | orchestrator |                 "data": "osd-block-465b2d07-90ab-575b-b156-9a24eede9b64",
2026-04-10 00:41:05.425443 | orchestrator |                 "data_vg": "ceph-465b2d07-90ab-575b-b156-9a24eede9b64"
2026-04-10 00:41:05.425451 | orchestrator |             },
2026-04-10 00:41:05.425460 | orchestrator |             {
2026-04-10 00:41:05.425469 | orchestrator |                 "data": "osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81",
2026-04-10 00:41:05.425477 | orchestrator |                 "data_vg": "ceph-a684d377-5ec1-594b-83a4-e92528b1ce81"
2026-04-10 00:41:05.425486 | orchestrator |             }
2026-04-10 00:41:05.425494 | orchestrator |         ]
2026-04-10 00:41:05.425503 | orchestrator |     }
2026-04-10 00:41:05.425512 | orchestrator | }
2026-04-10 00:41:05.425520 | orchestrator |
2026-04-10 00:41:05.425529 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-10 00:41:05.425537 | orchestrator | Friday 10 April 2026 00:41:03 +0000 (0:00:00.173) 0:00:23.541 **********
2026-04-10 00:41:05.425546 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-10 00:41:05.425554 | orchestrator |
2026-04-10 00:41:05.425569 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-10 00:41:05.425578 | orchestrator |
2026-04-10 00:41:05.425587 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
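The `_ceph_configure_lvm_config_data` dump above pairs each entry in `ceph_osd_devices` with an `lvm_volumes` entry derived from the same `osd_lvm_uuid`. A minimal sketch of that mapping, inferred only from the naming scheme visible in this log (not the actual OSISM role code):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive lvm_volumes entries from a ceph_osd_devices mapping.

    Naming scheme as seen in the log output:
      data LV: osd-block-<osd_lvm_uuid>
      data VG: ceph-<osd_lvm_uuid>
    """
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]


# The testbed-node-4 values from the log reproduce the printed structure:
volumes = build_lvm_volumes({
    "sdb": {"osd_lvm_uuid": "465b2d07-90ab-575b-b156-9a24eede9b64"},
    "sdc": {"osd_lvm_uuid": "a684d377-5ec1-594b-83a4-e92528b1ce81"},
})
```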
2026-04-10 00:41:05.425595 | orchestrator | Friday 10 April 2026 00:41:04 +0000 (0:00:00.942) 0:00:24.483 **********
2026-04-10 00:41:05.425604 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-10 00:41:05.425613 | orchestrator |
2026-04-10 00:41:05.425621 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-10 00:41:05.425630 | orchestrator | Friday 10 April 2026 00:41:04 +0000 (0:00:00.351) 0:00:24.834 **********
2026-04-10 00:41:05.425638 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:41:05.425647 | orchestrator |
2026-04-10 00:41:05.425656 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:05.425664 | orchestrator | Friday 10 April 2026 00:41:05 +0000 (0:00:00.467) 0:00:25.302 **********
2026-04-10 00:41:05.425673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-10 00:41:05.425681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-10 00:41:05.425690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-10 00:41:05.425698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-10 00:41:05.425707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-10 00:41:05.425722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-10 00:41:13.393662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-10 00:41:13.393794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-10 00:41:13.393819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-10 00:41:13.393839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-10 00:41:13.393883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-10 00:41:13.393905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-10 00:41:13.393923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-10 00:41:13.393942 | orchestrator |
2026-04-10 00:41:13.393962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.393984 | orchestrator | Friday 10 April 2026 00:41:05 +0000 (0:00:00.366) 0:00:25.669 **********
2026-04-10 00:41:13.394086 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394112 | orchestrator |
2026-04-10 00:41:13.394131 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394149 | orchestrator | Friday 10 April 2026 00:41:05 +0000 (0:00:00.222) 0:00:25.892 **********
2026-04-10 00:41:13.394168 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394188 | orchestrator |
2026-04-10 00:41:13.394208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394228 | orchestrator | Friday 10 April 2026 00:41:05 +0000 (0:00:00.222) 0:00:26.115 **********
2026-04-10 00:41:13.394246 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394266 | orchestrator |
2026-04-10 00:41:13.394313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394334 | orchestrator | Friday 10 April 2026 00:41:06 +0000 (0:00:00.206) 0:00:26.321 **********
2026-04-10 00:41:13.394362 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394382 | orchestrator |
2026-04-10 00:41:13.394401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394421 | orchestrator | Friday 10 April 2026 00:41:06 +0000 (0:00:00.200) 0:00:26.522 **********
2026-04-10 00:41:13.394471 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394491 | orchestrator |
2026-04-10 00:41:13.394510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394528 | orchestrator | Friday 10 April 2026 00:41:06 +0000 (0:00:00.205) 0:00:26.728 **********
2026-04-10 00:41:13.394546 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394565 | orchestrator |
2026-04-10 00:41:13.394583 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394599 | orchestrator | Friday 10 April 2026 00:41:06 +0000 (0:00:00.204) 0:00:26.932 **********
2026-04-10 00:41:13.394615 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394632 | orchestrator |
2026-04-10 00:41:13.394651 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394668 | orchestrator | Friday 10 April 2026 00:41:06 +0000 (0:00:00.189) 0:00:27.122 **********
2026-04-10 00:41:13.394686 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.394705 | orchestrator |
2026-04-10 00:41:13.394723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394742 | orchestrator | Friday 10 April 2026 00:41:07 +0000 (0:00:00.175) 0:00:27.297 **********
2026-04-10 00:41:13.394761 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21)
2026-04-10 00:41:13.394780 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21)
2026-04-10 00:41:13.394799 | orchestrator |
2026-04-10 00:41:13.394817 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394837 | orchestrator | Friday 10 April 2026 00:41:07 +0000 (0:00:00.527) 0:00:27.825 **********
2026-04-10 00:41:13.394855 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec)
2026-04-10 00:41:13.394874 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec)
2026-04-10 00:41:13.394893 | orchestrator |
2026-04-10 00:41:13.394911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.394930 | orchestrator | Friday 10 April 2026 00:41:08 +0000 (0:00:00.668) 0:00:28.493 **********
2026-04-10 00:41:13.394949 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf)
2026-04-10 00:41:13.394968 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf)
2026-04-10 00:41:13.394987 | orchestrator |
2026-04-10 00:41:13.395003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.395019 | orchestrator | Friday 10 April 2026 00:41:08 +0000 (0:00:00.372) 0:00:28.866 **********
2026-04-10 00:41:13.395036 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8)
2026-04-10 00:41:13.395053 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8)
2026-04-10 00:41:13.395070 | orchestrator |
2026-04-10 00:41:13.395086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:41:13.395102 | orchestrator | Friday 10 April 2026 00:41:09 +0000 (0:00:00.454) 0:00:29.320 **********
2026-04-10 00:41:13.395118 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-10 00:41:13.395134 | orchestrator |
2026-04-10 00:41:13.395151 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395193 | orchestrator | Friday 10 April 2026 00:41:09 +0000 (0:00:00.369) 0:00:29.690 **********
2026-04-10 00:41:13.395209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-10 00:41:13.395226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-10 00:41:13.395243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-10 00:41:13.395260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-10 00:41:13.395324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-10 00:41:13.395344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-10 00:41:13.395361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-10 00:41:13.395377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-10 00:41:13.395393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-10 00:41:13.395409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-10 00:41:13.395426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-10 00:41:13.395442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-10 00:41:13.395457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-10 00:41:13.395474 | orchestrator |
2026-04-10 00:41:13.395491 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395507 | orchestrator | Friday 10 April 2026 00:41:09 +0000 (0:00:00.395) 0:00:30.086 **********
2026-04-10 00:41:13.395525 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.395542 | orchestrator |
2026-04-10 00:41:13.395559 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395575 | orchestrator | Friday 10 April 2026 00:41:10 +0000 (0:00:00.210) 0:00:30.296 **********
2026-04-10 00:41:13.395592 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.395609 | orchestrator |
2026-04-10 00:41:13.395626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395643 | orchestrator | Friday 10 April 2026 00:41:10 +0000 (0:00:00.185) 0:00:30.482 **********
2026-04-10 00:41:13.395659 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.395675 | orchestrator |
2026-04-10 00:41:13.395691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395718 | orchestrator | Friday 10 April 2026 00:41:10 +0000 (0:00:00.211) 0:00:30.694 **********
2026-04-10 00:41:13.395736 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.395752 | orchestrator |
2026-04-10 00:41:13.395768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395784 | orchestrator | Friday 10 April 2026 00:41:10 +0000 (0:00:00.187) 0:00:30.881 **********
2026-04-10 00:41:13.395800 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.395816 | orchestrator |
2026-04-10 00:41:13.395833 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395850 | orchestrator | Friday 10 April 2026 00:41:10 +0000 (0:00:00.184) 0:00:31.066 **********
2026-04-10 00:41:13.395867 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.395883 | orchestrator |
2026-04-10 00:41:13.395899 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395915 | orchestrator | Friday 10 April 2026 00:41:11 +0000 (0:00:00.682) 0:00:31.748 **********
2026-04-10 00:41:13.395931 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.395946 | orchestrator |
2026-04-10 00:41:13.395962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.395978 | orchestrator | Friday 10 April 2026 00:41:11 +0000 (0:00:00.194) 0:00:31.943 **********
2026-04-10 00:41:13.395995 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.396012 | orchestrator |
2026-04-10 00:41:13.396029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.396045 | orchestrator | Friday 10 April 2026 00:41:11 +0000 (0:00:00.200) 0:00:32.143 **********
2026-04-10 00:41:13.396061 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-10 00:41:13.396088 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-10 00:41:13.396105 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-10 00:41:13.396120 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-10 00:41:13.396136 | orchestrator |
2026-04-10 00:41:13.396152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.396168 | orchestrator | Friday 10 April 2026 00:41:12 +0000 (0:00:00.750) 0:00:32.894 **********
2026-04-10 00:41:13.396185 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.396201 | orchestrator |
2026-04-10 00:41:13.396217 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.396233 | orchestrator | Friday 10 April 2026 00:41:12 +0000 (0:00:00.168) 0:00:33.062 **********
2026-04-10 00:41:13.396249 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.396265 | orchestrator |
2026-04-10 00:41:13.396343 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.396364 | orchestrator | Friday 10 April 2026 00:41:13 +0000 (0:00:00.168) 0:00:33.231 **********
2026-04-10 00:41:13.396381 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.396396 | orchestrator |
2026-04-10 00:41:13.396413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:41:13.396423 | orchestrator | Friday 10 April 2026 00:41:13 +0000 (0:00:00.163) 0:00:33.395 **********
2026-04-10 00:41:13.396433 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:13.396442 | orchestrator |
2026-04-10 00:41:13.396462 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-10 00:41:16.808107 | orchestrator | Friday 10 April 2026 00:41:13 +0000 (0:00:00.155) 0:00:33.550 **********
2026-04-10 00:41:16.808223 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-10 00:41:16.808239 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-10 00:41:16.808251 | orchestrator |
2026-04-10 00:41:16.808264 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-10 00:41:16.808276 | orchestrator | Friday 10 April 2026 00:41:13 +0000 (0:00:00.139) 0:00:33.690 **********
2026-04-10 00:41:16.808334 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808348 | orchestrator |
2026-04-10 00:41:16.808360 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-10 00:41:16.808371 | orchestrator | Friday 10 April 2026 00:41:13 +0000 (0:00:00.103) 0:00:33.793 **********
2026-04-10 00:41:16.808382 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808393 | orchestrator |
2026-04-10 00:41:16.808405 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-10 00:41:16.808416 | orchestrator | Friday 10 April 2026 00:41:13 +0000 (0:00:00.115) 0:00:33.909 **********
2026-04-10 00:41:16.808427 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808438 | orchestrator |
2026-04-10 00:41:16.808450 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-10 00:41:16.808462 | orchestrator | Friday 10 April 2026 00:41:13 +0000 (0:00:00.150) 0:00:34.059 **********
2026-04-10 00:41:16.808473 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:41:16.808485 | orchestrator |
2026-04-10 00:41:16.808496 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-10 00:41:16.808507 | orchestrator | Friday 10 April 2026 00:41:14 +0000 (0:00:00.291) 0:00:34.351 **********
2026-04-10 00:41:16.808519 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09201c46-e11a-5302-956e-912d17e7f9de'}})
2026-04-10 00:41:16.808531 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0863171e-1302-565f-bee5-d18b6804a785'}})
2026-04-10 00:41:16.808541 | orchestrator |
2026-04-10 00:41:16.808552 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-10 00:41:16.808564 | orchestrator | Friday 10 April 2026 00:41:14 +0000 (0:00:00.176) 0:00:34.528 **********
2026-04-10 00:41:16.808576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09201c46-e11a-5302-956e-912d17e7f9de'}})
2026-04-10 00:41:16.808614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0863171e-1302-565f-bee5-d18b6804a785'}})
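The `osd_lvm_uuid` values set above all carry the UUID version nibble 5 (e.g. `09201c46-e11a-5302-...`, `465b2d07-90ab-575b-...`), i.e. they look like name-based (SHA-1) UUIDs, which would keep them stable across repeated runs of the configure play. A hypothetical sketch of how such deterministic IDs can be produced; the namespace and name format here are illustrative assumptions, not taken from the OSISM role:

```python
import uuid

def osd_device_uuid(hostname: str, device: str) -> uuid.UUID:
    """Derive a stable, name-based (version 5) UUID for a host/device pair.

    NAMESPACE_DNS and the "<hostname>-<device>" name are assumptions for
    illustration; the point is that uuid5 is deterministic, so re-running
    the play yields the same VG/LV names for the same device.
    """
    return uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}")
```

Because the result depends only on its inputs, calling this twice for `("testbed-node-5", "sdb")` returns the identical UUID, and the derived `ceph-<uuid>` VG name stays stable.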
2026-04-10 00:41:16.808628 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808641 | orchestrator |
2026-04-10 00:41:16.808654 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-10 00:41:16.808667 | orchestrator | Friday 10 April 2026 00:41:14 +0000 (0:00:00.142) 0:00:34.670 **********
2026-04-10 00:41:16.808678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09201c46-e11a-5302-956e-912d17e7f9de'}})
2026-04-10 00:41:16.808689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0863171e-1302-565f-bee5-d18b6804a785'}})
2026-04-10 00:41:16.808700 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808711 | orchestrator |
2026-04-10 00:41:16.808722 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-10 00:41:16.808733 | orchestrator | Friday 10 April 2026 00:41:14 +0000 (0:00:00.150) 0:00:34.821 **********
2026-04-10 00:41:16.808744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09201c46-e11a-5302-956e-912d17e7f9de'}})
2026-04-10 00:41:16.808755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0863171e-1302-565f-bee5-d18b6804a785'}})
2026-04-10 00:41:16.808765 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808776 | orchestrator |
2026-04-10 00:41:16.808787 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-10 00:41:16.808798 | orchestrator | Friday 10 April 2026 00:41:14 +0000 (0:00:00.108) 0:00:34.929 **********
2026-04-10 00:41:16.808809 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:41:16.808819 | orchestrator |
2026-04-10 00:41:16.808830 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-10 00:41:16.808841 | orchestrator | Friday 10 April 2026 00:41:14 +0000 (0:00:00.098) 0:00:35.027 **********
2026-04-10 00:41:16.808852 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:41:16.808862 | orchestrator |
2026-04-10 00:41:16.808873 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-10 00:41:16.808884 | orchestrator | Friday 10 April 2026 00:41:14 +0000 (0:00:00.092) 0:00:35.120 **********
2026-04-10 00:41:16.808894 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808905 | orchestrator |
2026-04-10 00:41:16.808916 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-10 00:41:16.808927 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.090) 0:00:35.211 **********
2026-04-10 00:41:16.808938 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808948 | orchestrator |
2026-04-10 00:41:16.808959 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-10 00:41:16.808970 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.082) 0:00:35.294 **********
2026-04-10 00:41:16.808980 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.808991 | orchestrator |
2026-04-10 00:41:16.809002 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-10 00:41:16.809013 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.092) 0:00:35.387 **********
2026-04-10 00:41:16.809024 | orchestrator | ok: [testbed-node-5] => {
2026-04-10 00:41:16.809034 | orchestrator |     "ceph_osd_devices": {
2026-04-10 00:41:16.809046 | orchestrator |         "sdb": {
2026-04-10 00:41:16.809079 | orchestrator |             "osd_lvm_uuid": "09201c46-e11a-5302-956e-912d17e7f9de"
2026-04-10 00:41:16.809091 | orchestrator |         },
2026-04-10 00:41:16.809103 | orchestrator |         "sdc": {
2026-04-10 00:41:16.809132 | orchestrator |             "osd_lvm_uuid": "0863171e-1302-565f-bee5-d18b6804a785"
2026-04-10 00:41:16.809144 | orchestrator |         }
2026-04-10 00:41:16.809155 | orchestrator |     }
2026-04-10 00:41:16.809166 | orchestrator | }
2026-04-10 00:41:16.809177 | orchestrator |
2026-04-10 00:41:16.809199 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-10 00:41:16.809211 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.107) 0:00:35.494 **********
2026-04-10 00:41:16.809222 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.809233 | orchestrator |
2026-04-10 00:41:16.809244 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-10 00:41:16.809254 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.118) 0:00:35.613 **********
2026-04-10 00:41:16.809265 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.809276 | orchestrator |
2026-04-10 00:41:16.809312 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-10 00:41:16.809324 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.253) 0:00:35.866 **********
2026-04-10 00:41:16.809335 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:41:16.809346 | orchestrator |
2026-04-10 00:41:16.809356 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-10 00:41:16.809367 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.098) 0:00:35.964 **********
2026-04-10 00:41:16.809378 | orchestrator | changed: [testbed-node-5] => {
2026-04-10 00:41:16.809389 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-10 00:41:16.809400 | orchestrator |         "ceph_osd_devices": {
2026-04-10 00:41:16.809411 | orchestrator |             "sdb": {
2026-04-10 00:41:16.809422 | orchestrator |                 "osd_lvm_uuid": "09201c46-e11a-5302-956e-912d17e7f9de"
2026-04-10 00:41:16.809433 | orchestrator |             },
2026-04-10 00:41:16.809444 | orchestrator |             "sdc": {
2026-04-10 00:41:16.809460 | orchestrator |                 "osd_lvm_uuid": "0863171e-1302-565f-bee5-d18b6804a785"
2026-04-10 00:41:16.809471 | orchestrator |             }
2026-04-10 00:41:16.809481 | orchestrator |         },
2026-04-10 00:41:16.809492 | orchestrator |         "lvm_volumes": [
2026-04-10 00:41:16.809503 | orchestrator |             {
2026-04-10 00:41:16.809515 | orchestrator |                 "data": "osd-block-09201c46-e11a-5302-956e-912d17e7f9de",
2026-04-10 00:41:16.809526 | orchestrator |                 "data_vg": "ceph-09201c46-e11a-5302-956e-912d17e7f9de"
2026-04-10 00:41:16.809537 | orchestrator |             },
2026-04-10 00:41:16.809552 | orchestrator |             {
2026-04-10 00:41:16.809563 | orchestrator |                 "data": "osd-block-0863171e-1302-565f-bee5-d18b6804a785",
2026-04-10 00:41:16.809574 | orchestrator |                 "data_vg": "ceph-0863171e-1302-565f-bee5-d18b6804a785"
2026-04-10 00:41:16.809585 | orchestrator |             }
2026-04-10 00:41:16.809596 | orchestrator |         ]
2026-04-10 00:41:16.809607 | orchestrator |     }
2026-04-10 00:41:16.809618 | orchestrator | }
2026-04-10 00:41:16.809629 | orchestrator |
2026-04-10 00:41:16.809640 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-10 00:41:16.809650 | orchestrator | Friday 10 April 2026 00:41:15 +0000 (0:00:00.187) 0:00:36.152 **********
2026-04-10 00:41:16.809661 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-10 00:41:16.809672 | orchestrator |
2026-04-10 00:41:16.809683 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:41:16.809694 | orchestrator | testbed-node-3 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
2026-04-10 00:41:16.809706 | orchestrator | testbed-node-4 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
2026-04-10 00:41:16.809717 | orchestrator | testbed-node-5 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
2026-04-10 00:41:16.809728 | orchestrator |
2026-04-10 00:41:16.809739 | orchestrator |
2026-04-10 00:41:16.809749 | orchestrator |
2026-04-10 00:41:16.809760 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:41:16.809771 | orchestrator | Friday 10 April 2026 00:41:16 +0000 (0:00:00.800) 0:00:36.953 **********
2026-04-10 00:41:16.809789 | orchestrator | ===============================================================================
2026-04-10 00:41:16.809800 | orchestrator | Write configuration file ------------------------------------------------ 3.60s
2026-04-10 00:41:16.809811 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s
2026-04-10 00:41:16.809822 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s
2026-04-10 00:41:16.809832 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s
2026-04-10 00:41:16.809843 | orchestrator | Get initial list of available block devices ----------------------------- 0.87s
2026-04-10 00:41:16.809854 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2026-04-10 00:41:16.809865 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-04-10 00:41:16.809876 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-04-10 00:41:16.809887 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2026-04-10 00:41:16.809898 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2026-04-10 00:41:16.809909 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.66s
2026-04-10 00:41:16.809920 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.59s
2026-04-10 00:41:16.809931 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2026-04-10 00:41:16.809949 | orchestrator | Print configuration data ------------------------------------------------ 0.57s
2026-04-10 00:41:17.042560 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2026-04-10 00:41:17.042660 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.54s
2026-04-10 00:41:17.042676 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2026-04-10 00:41:17.042685 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-04-10 00:41:17.042693 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s
2026-04-10 00:41:17.042701 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.48s
2026-04-10 00:41:38.489148 | orchestrator | 2026-04-10 00:41:38 | INFO  | Task 4686df05-57f4-4e2c-9b0c-2dd18bb97434 (sync inventory) is running in background. Output coming soon.
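The PLAY RECAP block above is the usual place a CI wrapper checks for failures. A small sketch (an assumed helper, not part of this job) that parses such recap lines into per-host counters, e.g. to fail the build when `failed` or `unreachable` is non-zero:

```python
import re

# Matches lines like:
#   testbed-node-3 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Return (host, {counter: int}) for a PLAY RECAP line, or None."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = dict(
        (key, int(value))
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    )
    return m.group("host"), counters
```

With the recap lines from this run, all three nodes report `failed=0` and `unreachable=0`, so a wrapper using this check would let the job continue.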
2026-04-10 00:42:04.886363 | orchestrator | 2026-04-10 00:41:39 | INFO  | Starting group_vars file reorganization
2026-04-10 00:42:04.886463 | orchestrator | 2026-04-10 00:41:39 | INFO  | Moved 0 file(s) to their respective directories
2026-04-10 00:42:04.886475 | orchestrator | 2026-04-10 00:41:39 | INFO  | Group_vars file reorganization completed
2026-04-10 00:42:04.886484 | orchestrator | 2026-04-10 00:41:42 | INFO  | Starting variable preparation from inventory
2026-04-10 00:42:04.886492 | orchestrator | 2026-04-10 00:41:44 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-10 00:42:04.886501 | orchestrator | 2026-04-10 00:41:44 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-10 00:42:04.886508 | orchestrator | 2026-04-10 00:41:44 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-10 00:42:04.886516 | orchestrator | 2026-04-10 00:41:44 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-10 00:42:04.886524 | orchestrator | 2026-04-10 00:41:44 | INFO  | Variable preparation completed
2026-04-10 00:42:04.886532 | orchestrator | 2026-04-10 00:41:45 | INFO  | Starting inventory overwrite handling
2026-04-10 00:42:04.886540 | orchestrator | 2026-04-10 00:41:45 | INFO  | Handling group overwrites in 99-overwrite
2026-04-10 00:42:04.886547 | orchestrator | 2026-04-10 00:41:45 | INFO  | Removing group frr:children from 60-generic
2026-04-10 00:42:04.886581 | orchestrator | 2026-04-10 00:41:45 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-10 00:42:04.886589 | orchestrator | 2026-04-10 00:41:45 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-10 00:42:04.886596 | orchestrator | 2026-04-10 00:41:45 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-10 00:42:04.886603 | orchestrator | 2026-04-10 00:41:45 | INFO  | Handling group overwrites in 20-roles
2026-04-10 00:42:04.886611 | orchestrator | 2026-04-10 00:41:45 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-10 00:42:04.886618 | orchestrator | 2026-04-10 00:41:45 | INFO  | Removed 5 group(s) in total
2026-04-10 00:42:04.886626 | orchestrator | 2026-04-10 00:41:45 | INFO  | Inventory overwrite handling completed
2026-04-10 00:42:04.886633 | orchestrator | 2026-04-10 00:41:47 | INFO  | Starting merge of inventory files
2026-04-10 00:42:04.886641 | orchestrator | 2026-04-10 00:41:47 | INFO  | Inventory files merged successfully
2026-04-10 00:42:04.886648 | orchestrator | 2026-04-10 00:41:51 | INFO  | Generating minified hosts file
2026-04-10 00:42:04.886657 | orchestrator | 2026-04-10 00:41:52 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-10 00:42:04.886665 | orchestrator | 2026-04-10 00:41:52 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-10 00:42:04.886689 | orchestrator | 2026-04-10 00:41:53 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-10 00:42:04.886697 | orchestrator | 2026-04-10 00:42:03 | INFO  | Successfully wrote ClusterShell configuration
2026-04-10 00:42:04.886704 | orchestrator | [master bc97c4e] 2026-04-10-00-42
2026-04-10 00:42:04.886713 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-10 00:42:04.886722 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-10 00:42:04.886729 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-10 00:42:04.886736 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-10 00:42:06.084216 | orchestrator | 2026-04-10 00:42:06 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-10 00:42:06.136970 | orchestrator | 2026-04-10 00:42:06 | INFO  | Task 7d373333-fe72-4b55-b10d-39d9faffa732 (ceph-create-lvm-devices) was prepared for execution.
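The inventory preparation above merges the inventory files and then writes both a minified hosts file and a "fast" JSON inventory (`/inventory.merge/fast/hosts.json`). A minimal sketch of that last serialization step, assuming a plain group-to-hosts dict; the data, helper name, and output layout are illustrative, not the actual OSISM implementation:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical merged inventory: group name -> list of hosts.
# The real structure is produced by the OSISM inventory tooling.
merged_inventory = {
    "ceph_rgw_hosts": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "cephclient_mons": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
}

def write_fast_inventory(inventory: dict, path: Path) -> None:
    """Serialize the merged inventory as JSON so later steps can load it quickly."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(inventory, indent=2, sort_keys=True))

# Write under a temp dir instead of /inventory.merge to keep the sketch self-contained.
out = Path(tempfile.mkdtemp()) / "fast" / "hosts.json"
write_fast_inventory(merged_inventory, out)
print(sorted(json.loads(out.read_text())))
```

Dumping to JSON rather than YAML is what makes the "fast" copy cheap to parse on every subsequent task run.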
2026-04-10 00:42:06.137107 | orchestrator | 2026-04-10 00:42:06 | INFO  | It takes a moment until task 7d373333-fe72-4b55-b10d-39d9faffa732 (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-10 00:42:16.370920 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-10 00:42:16.371012 | orchestrator | 2.16.14
2026-04-10 00:42:16.371035 | orchestrator |
2026-04-10 00:42:16.371051 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-10 00:42:16.371067 | orchestrator |
2026-04-10 00:42:16.371081 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-10 00:42:16.371093 | orchestrator | Friday 10 April 2026 00:42:10 +0000 (0:00:00.245) 0:00:00.245 **********
2026-04-10 00:42:16.371106 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-10 00:42:16.371119 | orchestrator |
2026-04-10 00:42:16.371131 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-10 00:42:16.371145 | orchestrator | Friday 10 April 2026 00:42:10 +0000 (0:00:00.211) 0:00:00.456 **********
2026-04-10 00:42:16.371157 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:42:16.371173 | orchestrator |
2026-04-10 00:42:16.371187 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371203 | orchestrator | Friday 10 April 2026 00:42:10 +0000 (0:00:00.208) 0:00:00.665 **********
2026-04-10 00:42:16.371242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-10 00:42:16.371254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-10 00:42:16.371263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-10 00:42:16.371271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-10 00:42:16.371280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-10 00:42:16.371325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-10 00:42:16.371337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-10 00:42:16.371345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-10 00:42:16.371354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-10 00:42:16.371363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-10 00:42:16.371371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-10 00:42:16.371380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-10 00:42:16.371388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-10 00:42:16.371397 | orchestrator |
2026-04-10 00:42:16.371406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371415 | orchestrator | Friday 10 April 2026 00:42:10 +0000 (0:00:00.387) 0:00:01.052 **********
2026-04-10 00:42:16.371426 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371437 | orchestrator |
2026-04-10 00:42:16.371447 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371457 | orchestrator | Friday 10 April 2026 00:42:11 +0000 (0:00:00.368) 0:00:01.421 **********
2026-04-10 00:42:16.371466 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371476 | orchestrator |
2026-04-10 00:42:16.371486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371496 | orchestrator | Friday 10 April 2026 00:42:11 +0000 (0:00:00.156) 0:00:01.577 **********
2026-04-10 00:42:16.371506 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371517 | orchestrator |
2026-04-10 00:42:16.371526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371537 | orchestrator | Friday 10 April 2026 00:42:11 +0000 (0:00:00.173) 0:00:01.751 **********
2026-04-10 00:42:16.371546 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371556 | orchestrator |
2026-04-10 00:42:16.371566 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371576 | orchestrator | Friday 10 April 2026 00:42:11 +0000 (0:00:00.176) 0:00:01.927 **********
2026-04-10 00:42:16.371586 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371596 | orchestrator |
2026-04-10 00:42:16.371606 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371615 | orchestrator | Friday 10 April 2026 00:42:11 +0000 (0:00:00.153) 0:00:02.081 **********
2026-04-10 00:42:16.371626 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371635 | orchestrator |
2026-04-10 00:42:16.371646 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371656 | orchestrator | Friday 10 April 2026 00:42:12 +0000 (0:00:00.171) 0:00:02.252 **********
2026-04-10 00:42:16.371664 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371673 | orchestrator |
2026-04-10 00:42:16.371682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371690 | orchestrator | Friday 10 April 2026 00:42:12 +0000 (0:00:00.167) 0:00:02.420 **********
2026-04-10 00:42:16.371699 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.371714 | orchestrator |
2026-04-10 00:42:16.371723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371731 | orchestrator | Friday 10 April 2026 00:42:12 +0000 (0:00:00.163) 0:00:02.584 **********
2026-04-10 00:42:16.371740 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae)
2026-04-10 00:42:16.371750 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae)
2026-04-10 00:42:16.371758 | orchestrator |
2026-04-10 00:42:16.371767 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371793 | orchestrator | Friday 10 April 2026 00:42:12 +0000 (0:00:00.396) 0:00:02.980 **********
2026-04-10 00:42:16.371802 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a)
2026-04-10 00:42:16.371811 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a)
2026-04-10 00:42:16.371819 | orchestrator |
2026-04-10 00:42:16.371828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371837 | orchestrator | Friday 10 April 2026 00:42:13 +0000 (0:00:00.366) 0:00:03.347 **********
2026-04-10 00:42:16.371845 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e)
2026-04-10 00:42:16.371854 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e)
2026-04-10 00:42:16.371862 | orchestrator |
2026-04-10 00:42:16.371871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371880 | orchestrator | Friday 10 April 2026 00:42:13 +0000 (0:00:00.538) 0:00:03.886 **********
2026-04-10 00:42:16.371888 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755)
2026-04-10 00:42:16.371897 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755)
2026-04-10 00:42:16.371905 | orchestrator |
2026-04-10 00:42:16.371914 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:42:16.371922 | orchestrator | Friday 10 April 2026 00:42:14 +0000 (0:00:00.512) 0:00:04.399 **********
2026-04-10 00:42:16.371931 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-10 00:42:16.371940 | orchestrator |
2026-04-10 00:42:16.371948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.371957 | orchestrator | Friday 10 April 2026 00:42:14 +0000 (0:00:00.521) 0:00:04.921 **********
2026-04-10 00:42:16.371965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-10 00:42:16.371974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-10 00:42:16.371983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-10 00:42:16.371991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-10 00:42:16.372000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-10 00:42:16.372008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-10 00:42:16.372017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-10 00:42:16.372025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-10 00:42:16.372034 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-10 00:42:16.372043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-10 00:42:16.372059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-10 00:42:16.372073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-10 00:42:16.372094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-10 00:42:16.372108 | orchestrator |
2026-04-10 00:42:16.372122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.372135 | orchestrator | Friday 10 April 2026 00:42:15 +0000 (0:00:00.375) 0:00:05.296 **********
2026-04-10 00:42:16.372149 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.372163 | orchestrator |
2026-04-10 00:42:16.372178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.372192 | orchestrator | Friday 10 April 2026 00:42:15 +0000 (0:00:00.176) 0:00:05.472 **********
2026-04-10 00:42:16.372208 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.372223 | orchestrator |
2026-04-10 00:42:16.372249 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.372259 | orchestrator | Friday 10 April 2026 00:42:15 +0000 (0:00:00.183) 0:00:05.655 **********
2026-04-10 00:42:16.372267 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.372276 | orchestrator |
2026-04-10 00:42:16.372285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.372293 | orchestrator | Friday 10 April 2026 00:42:15 +0000 (0:00:00.169) 0:00:05.824 **********
2026-04-10 00:42:16.372325 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.372335 | orchestrator |
2026-04-10 00:42:16.372343 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.372352 | orchestrator | Friday 10 April 2026 00:42:15 +0000 (0:00:00.171) 0:00:05.996 **********
2026-04-10 00:42:16.372361 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.372369 | orchestrator |
2026-04-10 00:42:16.372378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.372387 | orchestrator | Friday 10 April 2026 00:42:15 +0000 (0:00:00.168) 0:00:06.165 **********
2026-04-10 00:42:16.372395 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.372404 | orchestrator |
2026-04-10 00:42:16.372412 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:16.372421 | orchestrator | Friday 10 April 2026 00:42:16 +0000 (0:00:00.208) 0:00:06.373 **********
2026-04-10 00:42:16.372430 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:16.372438 | orchestrator |
2026-04-10 00:42:16.372454 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:23.854837 | orchestrator | Friday 10 April 2026 00:42:16 +0000 (0:00:00.209) 0:00:06.583 **********
2026-04-10 00:42:23.854940 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.854953 | orchestrator |
2026-04-10 00:42:23.854962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:23.854970 | orchestrator | Friday 10 April 2026 00:42:16 +0000 (0:00:00.196) 0:00:06.780 **********
2026-04-10 00:42:23.854978 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-10 00:42:23.854987 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-10 00:42:23.854995 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-10 00:42:23.855004 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-10 00:42:23.855011 | orchestrator |
2026-04-10 00:42:23.855020 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:23.855029 | orchestrator | Friday 10 April 2026 00:42:17 +0000 (0:00:00.949) 0:00:07.730 **********
2026-04-10 00:42:23.855036 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855045 | orchestrator |
2026-04-10 00:42:23.855052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:23.855060 | orchestrator | Friday 10 April 2026 00:42:17 +0000 (0:00:00.178) 0:00:07.908 **********
2026-04-10 00:42:23.855068 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855076 | orchestrator |
2026-04-10 00:42:23.855084 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:23.855111 | orchestrator | Friday 10 April 2026 00:42:17 +0000 (0:00:00.173) 0:00:08.081 **********
2026-04-10 00:42:23.855121 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855128 | orchestrator |
2026-04-10 00:42:23.855137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:23.855144 | orchestrator | Friday 10 April 2026 00:42:18 +0000 (0:00:00.188) 0:00:08.269 **********
2026-04-10 00:42:23.855152 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855159 | orchestrator |
2026-04-10 00:42:23.855180 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-10 00:42:23.855188 | orchestrator | Friday 10 April 2026 00:42:18 +0000 (0:00:00.169) 0:00:08.438 **********
2026-04-10 00:42:23.855196 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855203 | orchestrator |
2026-04-10 00:42:23.855211 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-10 00:42:23.855219 | orchestrator | Friday 10 April 2026 00:42:18 +0000 (0:00:00.104) 0:00:08.542 **********
2026-04-10 00:42:23.855229 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4a24d887-4b45-578e-8445-fe6f68cb2659'}})
2026-04-10 00:42:23.855237 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '83f5954c-7956-54fb-af17-18f84b92edf0'}})
2026-04-10 00:42:23.855244 | orchestrator |
2026-04-10 00:42:23.855252 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-10 00:42:23.855260 | orchestrator | Friday 10 April 2026 00:42:18 +0000 (0:00:00.161) 0:00:08.704 **********
2026-04-10 00:42:23.855269 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855279 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855288 | orchestrator |
2026-04-10 00:42:23.855296 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-10 00:42:23.855350 | orchestrator | Friday 10 April 2026 00:42:20 +0000 (0:00:01.904) 0:00:10.609 **********
2026-04-10 00:42:23.855358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855375 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855382 | orchestrator |
2026-04-10 00:42:23.855388 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-10 00:42:23.855395 | orchestrator | Friday 10 April 2026 00:42:20 +0000 (0:00:00.172) 0:00:10.781 **********
2026-04-10 00:42:23.855402 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855408 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855416 | orchestrator |
2026-04-10 00:42:23.855423 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-10 00:42:23.855431 | orchestrator | Friday 10 April 2026 00:42:22 +0000 (0:00:01.534) 0:00:12.315 **********
2026-04-10 00:42:23.855438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855445 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855453 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855460 | orchestrator |
2026-04-10 00:42:23.855467 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-10 00:42:23.855483 | orchestrator | Friday 10 April 2026 00:42:22 +0000 (0:00:00.139) 0:00:12.454 **********
2026-04-10 00:42:23.855509 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855517 | orchestrator |
2026-04-10 00:42:23.855525 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-10 00:42:23.855533 | orchestrator | Friday 10 April 2026 00:42:22 +0000 (0:00:00.119) 0:00:12.574 **********
2026-04-10 00:42:23.855540 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855555 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855563 | orchestrator |
2026-04-10 00:42:23.855571 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-10 00:42:23.855578 | orchestrator | Friday 10 April 2026 00:42:22 +0000 (0:00:00.268) 0:00:12.843 **********
2026-04-10 00:42:23.855586 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855594 | orchestrator |
2026-04-10 00:42:23.855602 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-10 00:42:23.855609 | orchestrator | Friday 10 April 2026 00:42:22 +0000 (0:00:00.127) 0:00:12.970 **********
2026-04-10 00:42:23.855616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855630 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855637 | orchestrator |
2026-04-10 00:42:23.855645 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-10 00:42:23.855653 | orchestrator | Friday 10 April 2026 00:42:22 +0000 (0:00:00.137) 0:00:13.107 **********
2026-04-10 00:42:23.855659 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855666 | orchestrator |
2026-04-10 00:42:23.855672 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-10 00:42:23.855678 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.128) 0:00:13.236 **********
2026-04-10 00:42:23.855685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855699 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855707 | orchestrator |
2026-04-10 00:42:23.855715 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-10 00:42:23.855722 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.147) 0:00:13.383 **********
2026-04-10 00:42:23.855730 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:42:23.855738 | orchestrator |
2026-04-10 00:42:23.855745 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-10 00:42:23.855754 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.144) 0:00:13.528 **********
2026-04-10 00:42:23.855761 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855769 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855776 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855784 | orchestrator |
2026-04-10 00:42:23.855791 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-10 00:42:23.855806 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.138) 0:00:13.666 **********
2026-04-10 00:42:23.855814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855822 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855829 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855836 | orchestrator |
2026-04-10 00:42:23.855844 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-10 00:42:23.855852 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.141) 0:00:13.807 **********
2026-04-10 00:42:23.855860 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})
2026-04-10 00:42:23.855868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})
2026-04-10 00:42:23.855875 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855883 | orchestrator |
2026-04-10 00:42:23.855890 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-10 00:42:23.855899 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.138) 0:00:13.946 **********
2026-04-10 00:42:23.855915 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:23.855924 | orchestrator |
2026-04-10 00:42:23.855932 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-10 00:42:23.855945 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.120) 0:00:14.066 **********
2026-04-10 00:42:29.654894 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655008 | orchestrator |
2026-04-10 00:42:29.655028 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-10 00:42:29.655044 | orchestrator | Friday 10 April 2026 00:42:23 +0000 (0:00:00.128) 0:00:14.194 **********
2026-04-10 00:42:29.655057 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655068 | orchestrator |
2026-04-10 00:42:29.655082 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-10 00:42:29.655095 | orchestrator | Friday 10 April 2026 00:42:24 +0000 (0:00:00.112) 0:00:14.307 **********
2026-04-10 00:42:29.655108 | orchestrator | ok: [testbed-node-3] => {
2026-04-10 00:42:29.655119 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-10 00:42:29.655126 | orchestrator | }
2026-04-10 00:42:29.655134 | orchestrator |
2026-04-10 00:42:29.655142 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-10 00:42:29.655149 | orchestrator | Friday 10 April 2026 00:42:24 +0000 (0:00:00.257) 0:00:14.565 **********
2026-04-10 00:42:29.655156 | orchestrator | ok: [testbed-node-3] => {
2026-04-10 00:42:29.655164 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-10 00:42:29.655171 | orchestrator | }
2026-04-10 00:42:29.655178 | orchestrator |
2026-04-10 00:42:29.655186 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-10 00:42:29.655193 | orchestrator | Friday 10 April 2026 00:42:24 +0000 (0:00:00.132) 0:00:14.697 **********
2026-04-10 00:42:29.655200 | orchestrator | ok: [testbed-node-3] => {
2026-04-10 00:42:29.655208 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-10 00:42:29.655215 | orchestrator | }
2026-04-10 00:42:29.655222 | orchestrator |
2026-04-10 00:42:29.655230 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-10 00:42:29.655237 | orchestrator | Friday 10 April 2026 00:42:24 +0000 (0:00:00.129) 0:00:14.826 **********
2026-04-10 00:42:29.655244 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:42:29.655252 | orchestrator |
2026-04-10 00:42:29.655273 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-10 00:42:29.655280 | orchestrator | Friday 10 April 2026 00:42:25 +0000 (0:00:00.623) 0:00:15.450 **********
2026-04-10 00:42:29.655345 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:42:29.655354 | orchestrator |
2026-04-10 00:42:29.655361 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-10 00:42:29.655368 | orchestrator | Friday 10 April 2026 00:42:25 +0000 (0:00:00.507) 0:00:15.958 **********
2026-04-10 00:42:29.655376 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:42:29.655383 | orchestrator |
2026-04-10 00:42:29.655390 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-10 00:42:29.655397 | orchestrator | Friday 10 April 2026 00:42:26 +0000 (0:00:00.558) 0:00:16.516 **********
2026-04-10 00:42:29.655404 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:42:29.655412 | orchestrator |
2026-04-10 00:42:29.655419 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-10 00:42:29.655426 | orchestrator | Friday 10 April 2026 00:42:26 +0000 (0:00:00.132) 0:00:16.649 **********
2026-04-10 00:42:29.655434 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655443 | orchestrator |
2026-04-10 00:42:29.655451 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-10 00:42:29.655459 | orchestrator | Friday 10 April 2026 00:42:26 +0000 (0:00:00.117) 0:00:16.767 **********
2026-04-10 00:42:29.655468 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655475 | orchestrator |
2026-04-10 00:42:29.655484 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-10 00:42:29.655492 | orchestrator | Friday 10 April 2026 00:42:26 +0000 (0:00:00.105) 0:00:16.872 **********
2026-04-10 00:42:29.655500 | orchestrator | ok: [testbed-node-3] => {
2026-04-10 00:42:29.655509 | orchestrator |     "vgs_report": {
2026-04-10 00:42:29.655518 | orchestrator |         "vg": []
2026-04-10 00:42:29.655526 | orchestrator |     }
2026-04-10 00:42:29.655534 | orchestrator | }
2026-04-10 00:42:29.655542 | orchestrator |
2026-04-10 00:42:29.655550 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-10 00:42:29.655558 | orchestrator | Friday 10 April 2026 00:42:26 +0000 (0:00:00.130) 0:00:17.003 **********
2026-04-10 00:42:29.655567 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655574 | orchestrator |
2026-04-10 00:42:29.655582 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-10 00:42:29.655591 | orchestrator | Friday 10 April 2026 00:42:26 +0000 (0:00:00.136) 0:00:17.140 **********
2026-04-10 00:42:29.655600 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655608 | orchestrator |
2026-04-10 00:42:29.655616 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-10 00:42:29.655624 | orchestrator | Friday 10 April 2026 00:42:27 +0000 (0:00:00.123) 0:00:17.264 **********
2026-04-10 00:42:29.655632 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655639 | orchestrator |
2026-04-10 00:42:29.655648 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-10 00:42:29.655660 | orchestrator | Friday 10 April 2026 00:42:27 +0000 (0:00:00.283) 0:00:17.547 **********
2026-04-10 00:42:29.655672 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655691 | orchestrator |
2026-04-10 00:42:29.655707 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-10 00:42:29.655719 | orchestrator | Friday 10 April 2026 00:42:27 +0000 (0:00:00.122) 0:00:17.669 **********
2026-04-10 00:42:29.655731 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655742 | orchestrator |
2026-04-10 00:42:29.655754 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-10 00:42:29.655766 | orchestrator | Friday 10 April 2026 00:42:27 +0000 (0:00:00.119) 0:00:17.789 **********
2026-04-10 00:42:29.655778 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655790 | orchestrator |
2026-04-10 00:42:29.655803 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-10 00:42:29.655816 | orchestrator | Friday 10 April 2026 00:42:27 +0000 (0:00:00.113) 0:00:17.902 **********
2026-04-10 00:42:29.655829 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655852 | orchestrator |
2026-04-10 00:42:29.655860 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-10 00:42:29.655867 | orchestrator | Friday 10 April 2026 00:42:27 +0000 (0:00:00.132) 0:00:18.035 **********
2026-04-10 00:42:29.655892 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655900 | orchestrator |
2026-04-10 00:42:29.655907 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-10 00:42:29.655914 | orchestrator | Friday 10 April 2026 00:42:27 +0000 (0:00:00.111) 0:00:18.146 **********
2026-04-10 00:42:29.655921 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655928 | orchestrator |
2026-04-10 00:42:29.655935 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-10 00:42:29.655942 | orchestrator | Friday 10 April 2026 00:42:28 +0000 (0:00:00.138) 0:00:18.285 **********
2026-04-10 00:42:29.655949 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655957 | orchestrator |
2026-04-10 00:42:29.655964 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-10 00:42:29.655971 | orchestrator | Friday 10 April 2026 00:42:28 +0000 (0:00:00.151) 0:00:18.436 **********
2026-04-10 00:42:29.655978 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.655985 | orchestrator |
2026-04-10 00:42:29.655992 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-10 00:42:29.655999 | orchestrator | Friday 10 April 2026 00:42:28 +0000 (0:00:00.118) 0:00:18.559 **********
2026-04-10 00:42:29.656006 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.656013 | orchestrator |
2026-04-10 00:42:29.656020 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-10 00:42:29.656027 | orchestrator | Friday 10 April 2026 00:42:28 +0000 (0:00:00.122) 0:00:18.677 **********
2026-04-10 00:42:29.656034 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.656041 | orchestrator |
2026-04-10 00:42:29.656049 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-10 00:42:29.656056 | orchestrator | Friday 10 April 2026 00:42:28 +0000 (0:00:00.122) 0:00:18.800 **********
2026-04-10 00:42:29.656063 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:42:29.656070 | orchestrator |
2026-04-10 00:42:29.656084 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-10 00:42:29.656091 | orchestrator | Friday 10 April 2026 00:42:28 +0000 (0:00:00.120) 0:00:18.921 **********
2026-04-10 00:42:29.656100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg':
'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:29.656109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:29.656116 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:29.656123 | orchestrator | 2026-04-10 00:42:29.656130 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-10 00:42:29.656138 | orchestrator | Friday 10 April 2026 00:42:28 +0000 (0:00:00.170) 0:00:19.092 ********** 2026-04-10 00:42:29.656145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:29.656152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:29.656159 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:29.656167 | orchestrator | 2026-04-10 00:42:29.656174 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-10 00:42:29.656181 | orchestrator | Friday 10 April 2026 00:42:29 +0000 (0:00:00.287) 0:00:19.379 ********** 2026-04-10 00:42:29.656188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:29.656196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:29.656208 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:29.656215 | orchestrator | 2026-04-10 00:42:29.656222 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-10 
00:42:29.656229 | orchestrator | Friday 10 April 2026 00:42:29 +0000 (0:00:00.143) 0:00:19.523 ********** 2026-04-10 00:42:29.656236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:29.656244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:29.656251 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:29.656258 | orchestrator | 2026-04-10 00:42:29.656265 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-10 00:42:29.656272 | orchestrator | Friday 10 April 2026 00:42:29 +0000 (0:00:00.144) 0:00:19.667 ********** 2026-04-10 00:42:29.656279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:29.656287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:29.656294 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:29.656326 | orchestrator | 2026-04-10 00:42:29.656335 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-10 00:42:29.656342 | orchestrator | Friday 10 April 2026 00:42:29 +0000 (0:00:00.146) 0:00:19.814 ********** 2026-04-10 00:42:29.656355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:34.843999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 
'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:34.844090 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:34.844099 | orchestrator | 2026-04-10 00:42:34.844106 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-10 00:42:34.844113 | orchestrator | Friday 10 April 2026 00:42:29 +0000 (0:00:00.130) 0:00:19.944 ********** 2026-04-10 00:42:34.844119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:34.844125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:34.844130 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:34.844135 | orchestrator | 2026-04-10 00:42:34.844140 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-10 00:42:34.844144 | orchestrator | Friday 10 April 2026 00:42:29 +0000 (0:00:00.133) 0:00:20.078 ********** 2026-04-10 00:42:34.844149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:34.844154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:34.844159 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:34.844164 | orchestrator | 2026-04-10 00:42:34.844169 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-10 00:42:34.844174 | orchestrator | Friday 10 April 2026 00:42:30 +0000 (0:00:00.155) 0:00:20.233 ********** 2026-04-10 00:42:34.844179 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:42:34.844185 | 
orchestrator | 2026-04-10 00:42:34.844207 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-10 00:42:34.844212 | orchestrator | Friday 10 April 2026 00:42:30 +0000 (0:00:00.512) 0:00:20.746 ********** 2026-04-10 00:42:34.844217 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:42:34.844221 | orchestrator | 2026-04-10 00:42:34.844226 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-10 00:42:34.844244 | orchestrator | Friday 10 April 2026 00:42:31 +0000 (0:00:00.482) 0:00:21.228 ********** 2026-04-10 00:42:34.844249 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:42:34.844254 | orchestrator | 2026-04-10 00:42:34.844259 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-10 00:42:34.844264 | orchestrator | Friday 10 April 2026 00:42:31 +0000 (0:00:00.135) 0:00:21.364 ********** 2026-04-10 00:42:34.844269 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'vg_name': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'}) 2026-04-10 00:42:34.844275 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'vg_name': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'}) 2026-04-10 00:42:34.844280 | orchestrator | 2026-04-10 00:42:34.844285 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-10 00:42:34.844290 | orchestrator | Friday 10 April 2026 00:42:31 +0000 (0:00:00.176) 0:00:21.540 ********** 2026-04-10 00:42:34.844295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:34.844300 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 
'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:34.844322 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:34.844327 | orchestrator | 2026-04-10 00:42:34.844332 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-10 00:42:34.844336 | orchestrator | Friday 10 April 2026 00:42:31 +0000 (0:00:00.128) 0:00:21.668 ********** 2026-04-10 00:42:34.844341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:34.844346 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:34.844351 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:34.844356 | orchestrator | 2026-04-10 00:42:34.844361 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-10 00:42:34.844365 | orchestrator | Friday 10 April 2026 00:42:31 +0000 (0:00:00.326) 0:00:21.994 ********** 2026-04-10 00:42:34.844370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'})  2026-04-10 00:42:34.844375 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'})  2026-04-10 00:42:34.844380 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:42:34.844385 | orchestrator | 2026-04-10 00:42:34.844389 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-10 00:42:34.844394 | orchestrator | Friday 10 April 2026 00:42:31 +0000 (0:00:00.146) 0:00:22.141 ********** 2026-04-10 00:42:34.844411 | orchestrator | ok: [testbed-node-3] => { 2026-04-10 
00:42:34.844417 | orchestrator |  "lvm_report": { 2026-04-10 00:42:34.844422 | orchestrator |  "lv": [ 2026-04-10 00:42:34.844427 | orchestrator |  { 2026-04-10 00:42:34.844432 | orchestrator |  "lv_name": "osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659", 2026-04-10 00:42:34.844438 | orchestrator |  "vg_name": "ceph-4a24d887-4b45-578e-8445-fe6f68cb2659" 2026-04-10 00:42:34.844443 | orchestrator |  }, 2026-04-10 00:42:34.844453 | orchestrator |  { 2026-04-10 00:42:34.844458 | orchestrator |  "lv_name": "osd-block-83f5954c-7956-54fb-af17-18f84b92edf0", 2026-04-10 00:42:34.844463 | orchestrator |  "vg_name": "ceph-83f5954c-7956-54fb-af17-18f84b92edf0" 2026-04-10 00:42:34.844468 | orchestrator |  } 2026-04-10 00:42:34.844473 | orchestrator |  ], 2026-04-10 00:42:34.844478 | orchestrator |  "pv": [ 2026-04-10 00:42:34.844482 | orchestrator |  { 2026-04-10 00:42:34.844487 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-10 00:42:34.844492 | orchestrator |  "vg_name": "ceph-4a24d887-4b45-578e-8445-fe6f68cb2659" 2026-04-10 00:42:34.844497 | orchestrator |  }, 2026-04-10 00:42:34.844502 | orchestrator |  { 2026-04-10 00:42:34.844506 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-10 00:42:34.844511 | orchestrator |  "vg_name": "ceph-83f5954c-7956-54fb-af17-18f84b92edf0" 2026-04-10 00:42:34.844516 | orchestrator |  } 2026-04-10 00:42:34.844521 | orchestrator |  ] 2026-04-10 00:42:34.844526 | orchestrator |  } 2026-04-10 00:42:34.844531 | orchestrator | } 2026-04-10 00:42:34.844536 | orchestrator | 2026-04-10 00:42:34.844541 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-10 00:42:34.844546 | orchestrator | 2026-04-10 00:42:34.844552 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-10 00:42:34.844561 | orchestrator | Friday 10 April 2026 00:42:32 +0000 (0:00:00.239) 0:00:22.381 ********** 2026-04-10 00:42:34.844567 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-10 00:42:34.844572 | orchestrator | 2026-04-10 00:42:34.844578 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-10 00:42:34.844583 | orchestrator | Friday 10 April 2026 00:42:32 +0000 (0:00:00.218) 0:00:22.600 ********** 2026-04-10 00:42:34.844589 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:42:34.844595 | orchestrator | 2026-04-10 00:42:34.844600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:34.844606 | orchestrator | Friday 10 April 2026 00:42:32 +0000 (0:00:00.202) 0:00:22.802 ********** 2026-04-10 00:42:34.844611 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-10 00:42:34.844617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-10 00:42:34.844622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-10 00:42:34.844628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-10 00:42:34.844634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-10 00:42:34.844639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-10 00:42:34.844645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-10 00:42:34.844651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-10 00:42:34.844656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-10 00:42:34.844662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-10 00:42:34.844667 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-10 00:42:34.844673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-10 00:42:34.844678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-10 00:42:34.844684 | orchestrator | 2026-04-10 00:42:34.844689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:34.844695 | orchestrator | Friday 10 April 2026 00:42:33 +0000 (0:00:00.415) 0:00:23.218 ********** 2026-04-10 00:42:34.844701 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:34.844711 | orchestrator | 2026-04-10 00:42:34.844716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:34.844722 | orchestrator | Friday 10 April 2026 00:42:33 +0000 (0:00:00.214) 0:00:23.433 ********** 2026-04-10 00:42:34.844727 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:34.844733 | orchestrator | 2026-04-10 00:42:34.844738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:34.844744 | orchestrator | Friday 10 April 2026 00:42:33 +0000 (0:00:00.203) 0:00:23.637 ********** 2026-04-10 00:42:34.844749 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:34.844755 | orchestrator | 2026-04-10 00:42:34.844760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:34.844766 | orchestrator | Friday 10 April 2026 00:42:33 +0000 (0:00:00.237) 0:00:23.874 ********** 2026-04-10 00:42:34.844771 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:34.844776 | orchestrator | 2026-04-10 00:42:34.844782 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:34.844787 | orchestrator | Friday 10 April 2026 00:42:34 +0000 
(0:00:00.704) 0:00:24.578 ********** 2026-04-10 00:42:34.844793 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:34.844798 | orchestrator | 2026-04-10 00:42:34.844804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:34.844809 | orchestrator | Friday 10 April 2026 00:42:34 +0000 (0:00:00.251) 0:00:24.830 ********** 2026-04-10 00:42:34.844815 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:34.844820 | orchestrator | 2026-04-10 00:42:34.844829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:45.459028 | orchestrator | Friday 10 April 2026 00:42:34 +0000 (0:00:00.224) 0:00:25.055 ********** 2026-04-10 00:42:45.459109 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459118 | orchestrator | 2026-04-10 00:42:45.459123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:45.459128 | orchestrator | Friday 10 April 2026 00:42:35 +0000 (0:00:00.222) 0:00:25.277 ********** 2026-04-10 00:42:45.459132 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459137 | orchestrator | 2026-04-10 00:42:45.459141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:45.459145 | orchestrator | Friday 10 April 2026 00:42:35 +0000 (0:00:00.214) 0:00:25.492 ********** 2026-04-10 00:42:45.459150 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762) 2026-04-10 00:42:45.459155 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762) 2026-04-10 00:42:45.459159 | orchestrator | 2026-04-10 00:42:45.459163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:45.459167 | orchestrator | Friday 10 April 2026 00:42:35 +0000 
(0:00:00.415) 0:00:25.907 ********** 2026-04-10 00:42:45.459171 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23) 2026-04-10 00:42:45.459176 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23) 2026-04-10 00:42:45.459180 | orchestrator | 2026-04-10 00:42:45.459184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:45.459188 | orchestrator | Friday 10 April 2026 00:42:36 +0000 (0:00:00.398) 0:00:26.306 ********** 2026-04-10 00:42:45.459192 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd) 2026-04-10 00:42:45.459197 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd) 2026-04-10 00:42:45.459201 | orchestrator | 2026-04-10 00:42:45.459205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:45.459209 | orchestrator | Friday 10 April 2026 00:42:36 +0000 (0:00:00.465) 0:00:26.772 ********** 2026-04-10 00:42:45.459213 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16) 2026-04-10 00:42:45.459232 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16) 2026-04-10 00:42:45.459236 | orchestrator | 2026-04-10 00:42:45.459241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:42:45.459245 | orchestrator | Friday 10 April 2026 00:42:37 +0000 (0:00:00.470) 0:00:27.243 ********** 2026-04-10 00:42:45.459249 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-10 00:42:45.459253 | orchestrator | 2026-04-10 00:42:45.459257 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 
00:42:45.459261 | orchestrator | Friday 10 April 2026 00:42:37 +0000 (0:00:00.331) 0:00:27.574 ********** 2026-04-10 00:42:45.459265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-10 00:42:45.459270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-10 00:42:45.459274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-10 00:42:45.459278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-10 00:42:45.459282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-10 00:42:45.459304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-10 00:42:45.459372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-10 00:42:45.459380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-10 00:42:45.459387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-10 00:42:45.459394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-10 00:42:45.459401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-10 00:42:45.459408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-10 00:42:45.459415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-10 00:42:45.459421 | orchestrator | 2026-04-10 00:42:45.459425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459430 | 
orchestrator | Friday 10 April 2026 00:42:37 +0000 (0:00:00.576) 0:00:28.151 ********** 2026-04-10 00:42:45.459434 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459438 | orchestrator | 2026-04-10 00:42:45.459442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459446 | orchestrator | Friday 10 April 2026 00:42:38 +0000 (0:00:00.178) 0:00:28.329 ********** 2026-04-10 00:42:45.459450 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459454 | orchestrator | 2026-04-10 00:42:45.459458 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459462 | orchestrator | Friday 10 April 2026 00:42:38 +0000 (0:00:00.175) 0:00:28.505 ********** 2026-04-10 00:42:45.459466 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459470 | orchestrator | 2026-04-10 00:42:45.459488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459493 | orchestrator | Friday 10 April 2026 00:42:38 +0000 (0:00:00.180) 0:00:28.685 ********** 2026-04-10 00:42:45.459497 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459501 | orchestrator | 2026-04-10 00:42:45.459505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459509 | orchestrator | Friday 10 April 2026 00:42:38 +0000 (0:00:00.175) 0:00:28.861 ********** 2026-04-10 00:42:45.459513 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459517 | orchestrator | 2026-04-10 00:42:45.459521 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459532 | orchestrator | Friday 10 April 2026 00:42:38 +0000 (0:00:00.187) 0:00:29.049 ********** 2026-04-10 00:42:45.459536 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459540 | orchestrator | 2026-04-10 
00:42:45.459544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459549 | orchestrator | Friday 10 April 2026 00:42:39 +0000 (0:00:00.184) 0:00:29.233 ********** 2026-04-10 00:42:45.459554 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459559 | orchestrator | 2026-04-10 00:42:45.459563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459568 | orchestrator | Friday 10 April 2026 00:42:39 +0000 (0:00:00.191) 0:00:29.425 ********** 2026-04-10 00:42:45.459586 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459591 | orchestrator | 2026-04-10 00:42:45.459595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459603 | orchestrator | Friday 10 April 2026 00:42:39 +0000 (0:00:00.196) 0:00:29.621 ********** 2026-04-10 00:42:45.459608 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-10 00:42:45.459612 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-10 00:42:45.459618 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-10 00:42:45.459622 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-10 00:42:45.459627 | orchestrator | 2026-04-10 00:42:45.459631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459636 | orchestrator | Friday 10 April 2026 00:42:40 +0000 (0:00:00.845) 0:00:30.467 ********** 2026-04-10 00:42:45.459641 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:42:45.459645 | orchestrator | 2026-04-10 00:42:45.459650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:42:45.459655 | orchestrator | Friday 10 April 2026 00:42:40 +0000 (0:00:00.222) 0:00:30.689 ********** 2026-04-10 00:42:45.459659 | orchestrator | skipping: [testbed-node-4] 2026-04-10 
00:42:45.459664 | orchestrator | 
2026-04-10 00:42:45.459668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:45.459673 | orchestrator | Friday 10 April 2026 00:42:40 +0000 (0:00:00.224) 0:00:30.913 **********
2026-04-10 00:42:45.459677 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:45.459682 | orchestrator | 
2026-04-10 00:42:45.459686 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:42:45.459691 | orchestrator | Friday 10 April 2026 00:42:41 +0000 (0:00:00.956) 0:00:31.869 **********
2026-04-10 00:42:45.459696 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:45.459700 | orchestrator | 
2026-04-10 00:42:45.459705 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-10 00:42:45.459710 | orchestrator | Friday 10 April 2026 00:42:41 +0000 (0:00:00.207) 0:00:32.077 **********
2026-04-10 00:42:45.459715 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:45.459719 | orchestrator | 
2026-04-10 00:42:45.459724 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-10 00:42:45.459729 | orchestrator | Friday 10 April 2026 00:42:41 +0000 (0:00:00.130) 0:00:32.208 **********
2026-04-10 00:42:45.459733 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '465b2d07-90ab-575b-b156-9a24eede9b64'}})
2026-04-10 00:42:45.459738 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a684d377-5ec1-594b-83a4-e92528b1ce81'}})
2026-04-10 00:42:45.459743 | orchestrator | 
2026-04-10 00:42:45.459748 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-10 00:42:45.459752 | orchestrator | Friday 10 April 2026 00:42:42 +0000 (0:00:00.192) 0:00:32.400 **********
2026-04-10 00:42:45.459758 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'})
2026-04-10 00:42:45.459765 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'})
2026-04-10 00:42:45.459773 | orchestrator | 
2026-04-10 00:42:45.459778 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-10 00:42:45.459782 | orchestrator | Friday 10 April 2026 00:42:44 +0000 (0:00:01.819) 0:00:34.220 **********
2026-04-10 00:42:45.459787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:45.459793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:45.459797 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:45.459802 | orchestrator | 
2026-04-10 00:42:45.459807 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-10 00:42:45.459811 | orchestrator | Friday 10 April 2026 00:42:44 +0000 (0:00:00.158) 0:00:34.378 **********
2026-04-10 00:42:45.459816 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'})
2026-04-10 00:42:45.459824 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'})
2026-04-10 00:42:51.156102 | orchestrator | 
2026-04-10 00:42:51.156196 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-10 00:42:51.156208 | orchestrator | Friday 10 April 2026 00:42:45 +0000 (0:00:01.364) 0:00:35.743 **********
2026-04-10 00:42:51.156216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:51.156225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:51.156232 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156241 | orchestrator | 
2026-04-10 00:42:51.156248 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-10 00:42:51.156255 | orchestrator | Friday 10 April 2026 00:42:45 +0000 (0:00:00.186) 0:00:35.929 **********
2026-04-10 00:42:51.156262 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156268 | orchestrator | 
2026-04-10 00:42:51.156275 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-10 00:42:51.156282 | orchestrator | Friday 10 April 2026 00:42:45 +0000 (0:00:00.163) 0:00:36.093 **********
2026-04-10 00:42:51.156302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:51.156361 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:51.156369 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156376 | orchestrator | 
2026-04-10 00:42:51.156383 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-10 00:42:51.156389 | orchestrator | Friday 10 April 2026 00:42:46 +0000 (0:00:00.188) 0:00:36.282 **********
2026-04-10 00:42:51.156396 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156403 | orchestrator | 
2026-04-10 00:42:51.156410 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-10 00:42:51.156416 | orchestrator | Friday 10 April 2026 00:42:46 +0000 (0:00:00.151) 0:00:36.433 **********
2026-04-10 00:42:51.156423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:51.156430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:51.156457 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156464 | orchestrator | 
2026-04-10 00:42:51.156471 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-10 00:42:51.156478 | orchestrator | Friday 10 April 2026 00:42:46 +0000 (0:00:00.166) 0:00:36.600 **********
2026-04-10 00:42:51.156484 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156492 | orchestrator | 
2026-04-10 00:42:51.156499 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-10 00:42:51.156505 | orchestrator | Friday 10 April 2026 00:42:46 +0000 (0:00:00.341) 0:00:36.941 **********
2026-04-10 00:42:51.156512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:51.156519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:51.156526 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156532 | orchestrator | 
2026-04-10 00:42:51.156539 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-10 00:42:51.156546 | orchestrator | Friday 10 April 2026 00:42:46 +0000 (0:00:00.167) 0:00:37.109 **********
2026-04-10 00:42:51.156552 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:51.156560 | orchestrator | 
2026-04-10 00:42:51.156567 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-10 00:42:51.156574 | orchestrator | Friday 10 April 2026 00:42:47 +0000 (0:00:00.148) 0:00:37.258 **********
2026-04-10 00:42:51.156590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:51.156598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:51.156612 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156619 | orchestrator | 
2026-04-10 00:42:51.156625 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-10 00:42:51.156632 | orchestrator | Friday 10 April 2026 00:42:47 +0000 (0:00:00.170) 0:00:37.428 **********
2026-04-10 00:42:51.156640 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:51.156648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:51.156655 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156663 | orchestrator | 
2026-04-10 00:42:51.156670 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-10 00:42:51.156692 | orchestrator | Friday 10 April 2026 00:42:47 +0000 (0:00:00.168) 0:00:37.597 **********
2026-04-10 00:42:51.156700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:51.156708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:51.156716 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156723 | orchestrator | 
2026-04-10 00:42:51.156731 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-10 00:42:51.156738 | orchestrator | Friday 10 April 2026 00:42:47 +0000 (0:00:00.172) 0:00:37.769 **********
2026-04-10 00:42:51.156746 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156754 | orchestrator | 
2026-04-10 00:42:51.156761 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-10 00:42:51.156769 | orchestrator | Friday 10 April 2026 00:42:47 +0000 (0:00:00.156) 0:00:37.926 **********
2026-04-10 00:42:51.156782 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156790 | orchestrator | 
2026-04-10 00:42:51.156797 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-10 00:42:51.156807 | orchestrator | Friday 10 April 2026 00:42:47 +0000 (0:00:00.133) 0:00:38.060 **********
2026-04-10 00:42:51.156814 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.156821 | orchestrator | 
2026-04-10 00:42:51.156827 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-10 00:42:51.156834 | orchestrator | Friday 10 April 2026 00:42:47 +0000 (0:00:00.141) 0:00:38.201 **********
2026-04-10 00:42:51.156841 | orchestrator | ok: [testbed-node-4] => {
2026-04-10 00:42:51.156848 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-10 00:42:51.156855 | orchestrator | }
2026-04-10 00:42:51.156862 | orchestrator | 
2026-04-10 00:42:51.156868 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-10 00:42:51.156875 | orchestrator | Friday 10 April 2026 00:42:48 +0000 (0:00:00.171) 0:00:38.373 **********
2026-04-10 00:42:51.156882 | orchestrator | ok: [testbed-node-4] => {
2026-04-10 00:42:51.156888 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-10 00:42:51.156895 | orchestrator | }
2026-04-10 00:42:51.156902 | orchestrator | 
2026-04-10 00:42:51.156908 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-10 00:42:51.156915 | orchestrator | Friday 10 April 2026 00:42:48 +0000 (0:00:00.145) 0:00:38.519 **********
2026-04-10 00:42:51.156922 | orchestrator | ok: [testbed-node-4] => {
2026-04-10 00:42:51.156928 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-10 00:42:51.156935 | orchestrator | }
2026-04-10 00:42:51.156942 | orchestrator | 
2026-04-10 00:42:51.156948 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-10 00:42:51.156955 | orchestrator | Friday 10 April 2026 00:42:48 +0000 (0:00:00.168) 0:00:38.688 **********
2026-04-10 00:42:51.156962 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:51.156968 | orchestrator | 
2026-04-10 00:42:51.156975 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-10 00:42:51.156981 | orchestrator | Friday 10 April 2026 00:42:49 +0000 (0:00:00.751) 0:00:39.439 **********
2026-04-10 00:42:51.156988 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:51.156994 | orchestrator | 
2026-04-10 00:42:51.157001 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-10 00:42:51.157008 | orchestrator | Friday 10 April 2026 00:42:49 +0000 (0:00:00.503) 0:00:39.943 **********
2026-04-10 00:42:51.157014 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:51.157021 | orchestrator | 
2026-04-10 00:42:51.157028 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-10 00:42:51.157034 | orchestrator | Friday 10 April 2026 00:42:50 +0000 (0:00:00.490) 0:00:40.433 **********
2026-04-10 00:42:51.157041 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:51.157047 | orchestrator | 
2026-04-10 00:42:51.157054 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-10 00:42:51.157061 | orchestrator | Friday 10 April 2026 00:42:50 +0000 (0:00:00.159) 0:00:40.592 **********
2026-04-10 00:42:51.157067 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.157074 | orchestrator | 
2026-04-10 00:42:51.157081 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-10 00:42:51.157087 | orchestrator | Friday 10 April 2026 00:42:50 +0000 (0:00:00.100) 0:00:40.693 **********
2026-04-10 00:42:51.157094 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.157101 | orchestrator | 
2026-04-10 00:42:51.157107 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-10 00:42:51.157114 | orchestrator | Friday 10 April 2026 00:42:50 +0000 (0:00:00.126) 0:00:40.784 **********
2026-04-10 00:42:51.157121 | orchestrator | ok: [testbed-node-4] => {
2026-04-10 00:42:51.157128 | orchestrator |     "vgs_report": {
2026-04-10 00:42:51.157135 | orchestrator |         "vg": []
2026-04-10 00:42:51.157142 | orchestrator |     }
2026-04-10 00:42:51.157149 | orchestrator | }
2026-04-10 00:42:51.157160 | orchestrator | 
2026-04-10 00:42:51.157167 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-10 00:42:51.157174 | orchestrator | Friday 10 April 2026 00:42:50 +0000 (0:00:00.126) 0:00:40.911 **********
2026-04-10 00:42:51.157181 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.157188 | orchestrator | 
2026-04-10 00:42:51.157194 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-10 00:42:51.157201 | orchestrator | Friday 10 April 2026 00:42:50 +0000 (0:00:00.105) 0:00:41.016 **********
2026-04-10 00:42:51.157207 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.157214 | orchestrator | 
2026-04-10 00:42:51.157220 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-10 00:42:51.157227 | orchestrator | Friday 10 April 2026 00:42:50 +0000 (0:00:00.127) 0:00:41.144 **********
2026-04-10 00:42:51.157234 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.157240 | orchestrator | 
2026-04-10 00:42:51.157247 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-10 00:42:51.157254 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.116) 0:00:41.260 **********
2026-04-10 00:42:51.157260 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:51.157267 | orchestrator | 
2026-04-10 00:42:51.157278 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-10 00:42:55.430484 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.109) 0:00:41.370 **********
2026-04-10 00:42:55.430612 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.430629 | orchestrator | 
2026-04-10 00:42:55.430641 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-10 00:42:55.430652 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.108) 0:00:41.479 **********
2026-04-10 00:42:55.430661 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.430671 | orchestrator | 
2026-04-10 00:42:55.430681 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-10 00:42:55.430691 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.251) 0:00:41.730 **********
2026-04-10 00:42:55.430700 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.430710 | orchestrator | 
2026-04-10 00:42:55.430719 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-10 00:42:55.430729 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.104) 0:00:41.835 **********
2026-04-10 00:42:55.430738 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.430748 | orchestrator | 
2026-04-10 00:42:55.430760 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-10 00:42:55.430776 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.145) 0:00:41.981 **********
2026-04-10 00:42:55.430791 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.430805 | orchestrator | 
2026-04-10 00:42:55.430820 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-10 00:42:55.430837 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.116) 0:00:42.097 **********
2026-04-10 00:42:55.430852 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.430869 | orchestrator | 
2026-04-10 00:42:55.430886 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-10 00:42:55.430903 | orchestrator | Friday 10 April 2026 00:42:51 +0000 (0:00:00.119) 0:00:42.216 **********
2026-04-10 00:42:55.430919 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.430935 | orchestrator | 
2026-04-10 00:42:55.430976 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-10 00:42:55.430993 | orchestrator | Friday 10 April 2026 00:42:52 +0000 (0:00:00.119) 0:00:42.336 **********
2026-04-10 00:42:55.431008 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431023 | orchestrator | 
2026-04-10 00:42:55.431039 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-10 00:42:55.431055 | orchestrator | Friday 10 April 2026 00:42:52 +0000 (0:00:00.118) 0:00:42.454 **********
2026-04-10 00:42:55.431072 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431109 | orchestrator | 
2026-04-10 00:42:55.431127 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-10 00:42:55.431143 | orchestrator | Friday 10 April 2026 00:42:52 +0000 (0:00:00.123) 0:00:42.578 **********
2026-04-10 00:42:55.431159 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431177 | orchestrator | 
2026-04-10 00:42:55.431194 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-10 00:42:55.431205 | orchestrator | Friday 10 April 2026 00:42:52 +0000 (0:00:00.109) 0:00:42.688 **********
2026-04-10 00:42:55.431222 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431257 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431273 | orchestrator | 
2026-04-10 00:42:55.431290 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-10 00:42:55.431306 | orchestrator | Friday 10 April 2026 00:42:52 +0000 (0:00:00.136) 0:00:42.824 **********
2026-04-10 00:42:55.431354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431372 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431389 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431407 | orchestrator | 
2026-04-10 00:42:55.431424 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-10 00:42:55.431439 | orchestrator | Friday 10 April 2026 00:42:52 +0000 (0:00:00.133) 0:00:42.957 **********
2026-04-10 00:42:55.431454 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431488 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431505 | orchestrator | 
2026-04-10 00:42:55.431522 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-10 00:42:55.431541 | orchestrator | Friday 10 April 2026 00:42:52 +0000 (0:00:00.132) 0:00:43.090 **********
2026-04-10 00:42:55.431558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431576 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431592 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431610 | orchestrator | 
2026-04-10 00:42:55.431656 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-10 00:42:55.431673 | orchestrator | Friday 10 April 2026 00:42:53 +0000 (0:00:00.267) 0:00:43.357 **********
2026-04-10 00:42:55.431691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431724 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431740 | orchestrator | 
2026-04-10 00:42:55.431755 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-10 00:42:55.431772 | orchestrator | Friday 10 April 2026 00:42:53 +0000 (0:00:00.158) 0:00:43.516 **********
2026-04-10 00:42:55.431801 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431837 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431852 | orchestrator | 
2026-04-10 00:42:55.431869 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-10 00:42:55.431887 | orchestrator | Friday 10 April 2026 00:42:53 +0000 (0:00:00.179) 0:00:43.695 **********
2026-04-10 00:42:55.431903 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431935 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.431945 | orchestrator | 
2026-04-10 00:42:55.431955 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-10 00:42:55.431965 | orchestrator | Friday 10 April 2026 00:42:53 +0000 (0:00:00.160) 0:00:43.856 **********
2026-04-10 00:42:55.431974 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.431983 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.431993 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.432002 | orchestrator | 
2026-04-10 00:42:55.432012 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-10 00:42:55.432021 | orchestrator | Friday 10 April 2026 00:42:53 +0000 (0:00:00.133) 0:00:43.990 **********
2026-04-10 00:42:55.432031 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:55.432041 | orchestrator | 
2026-04-10 00:42:55.432050 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-10 00:42:55.432060 | orchestrator | Friday 10 April 2026 00:42:54 +0000 (0:00:00.544) 0:00:44.535 **********
2026-04-10 00:42:55.432069 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:55.432079 | orchestrator | 
2026-04-10 00:42:55.432090 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-10 00:42:55.432110 | orchestrator | Friday 10 April 2026 00:42:54 +0000 (0:00:00.515) 0:00:45.051 **********
2026-04-10 00:42:55.432134 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:42:55.432150 | orchestrator | 
2026-04-10 00:42:55.432166 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-10 00:42:55.432182 | orchestrator | Friday 10 April 2026 00:42:54 +0000 (0:00:00.155) 0:00:45.206 **********
2026-04-10 00:42:55.432195 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'vg_name': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'})
2026-04-10 00:42:55.432211 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'vg_name': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'})
2026-04-10 00:42:55.432225 | orchestrator | 
2026-04-10 00:42:55.432241 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-10 00:42:55.432257 | orchestrator | Friday 10 April 2026 00:42:55 +0000 (0:00:00.186) 0:00:45.393 **********
2026-04-10 00:42:55.432271 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.432284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:42:55.432298 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:42:55.432350 | orchestrator | 
2026-04-10 00:42:55.432367 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-10 00:42:55.432382 | orchestrator | Friday 10 April 2026 00:42:55 +0000 (0:00:00.164) 0:00:45.557 **********
2026-04-10 00:42:55.432397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:42:55.432427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:43:01.441487 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:43:01.441605 | orchestrator | 
2026-04-10 00:43:01.441637 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-10 00:43:01.441651 | orchestrator | Friday 10 April 2026 00:42:55 +0000 (0:00:00.188) 0:00:45.746 **********
2026-04-10 00:43:01.441662 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 
2026-04-10 00:43:01.441676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:43:01.441687 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:43:01.441699 | orchestrator | 
2026-04-10 00:43:01.441710 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-10 00:43:01.441722 | orchestrator | Friday 10 April 2026 00:42:55 +0000 (0:00:00.162) 0:00:45.908 **********
2026-04-10 00:43:01.441733 | orchestrator | ok: [testbed-node-4] => {
2026-04-10 00:43:01.441745 | orchestrator |     "lvm_report": {
2026-04-10 00:43:01.441757 | orchestrator |         "lv": [
2026-04-10 00:43:01.441798 | orchestrator |             {
2026-04-10 00:43:01.441810 | orchestrator |                 "lv_name": "osd-block-465b2d07-90ab-575b-b156-9a24eede9b64",
2026-04-10 00:43:01.441822 | orchestrator |                 "vg_name": "ceph-465b2d07-90ab-575b-b156-9a24eede9b64"
2026-04-10 00:43:01.441834 | orchestrator |             },
2026-04-10 00:43:01.441845 | orchestrator |             {
2026-04-10 00:43:01.441856 | orchestrator |                 "lv_name": "osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81",
2026-04-10 00:43:01.441867 | orchestrator |                 "vg_name": "ceph-a684d377-5ec1-594b-83a4-e92528b1ce81"
2026-04-10 00:43:01.441878 | orchestrator |             }
2026-04-10 00:43:01.441889 | orchestrator |         ],
2026-04-10 00:43:01.441900 | orchestrator |         "pv": [
2026-04-10 00:43:01.441911 | orchestrator |             {
2026-04-10 00:43:01.441922 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-10 00:43:01.441933 | orchestrator |                 "vg_name": "ceph-465b2d07-90ab-575b-b156-9a24eede9b64"
2026-04-10 00:43:01.441944 | orchestrator |             },
2026-04-10 00:43:01.441955 | orchestrator |             {
2026-04-10 00:43:01.441966 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-10 00:43:01.441977 | orchestrator |                 "vg_name": "ceph-a684d377-5ec1-594b-83a4-e92528b1ce81"
2026-04-10 00:43:01.441989 | orchestrator |             }
2026-04-10 00:43:01.442000 | orchestrator |         ]
2026-04-10 00:43:01.442011 | orchestrator |     }
2026-04-10 00:43:01.442106 | orchestrator | }
2026-04-10 00:43:01.442127 | orchestrator | 
2026-04-10 00:43:01.442139 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-10 00:43:01.442150 | orchestrator | 
2026-04-10 00:43:01.442161 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-10 00:43:01.442186 | orchestrator | Friday 10 April 2026 00:42:56 +0000 (0:00:00.692) 0:00:46.600 **********
2026-04-10 00:43:01.442198 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-10 00:43:01.442210 | orchestrator | 
2026-04-10 00:43:01.442221 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-10 00:43:01.442231 | orchestrator | Friday 10 April 2026 00:42:56 +0000 (0:00:00.257) 0:00:46.858 **********
2026-04-10 00:43:01.442277 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:01.442289 | orchestrator | 
2026-04-10 00:43:01.442300 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442373 | orchestrator | Friday 10 April 2026 00:42:56 +0000 (0:00:00.265) 0:00:47.124 **********
2026-04-10 00:43:01.442388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-10 00:43:01.442399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-10 00:43:01.442409 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-10 00:43:01.442425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-10 00:43:01.442436 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-10 00:43:01.442447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-10 00:43:01.442458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-10 00:43:01.442469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-10 00:43:01.442479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-10 00:43:01.442490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-10 00:43:01.442501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-10 00:43:01.442511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-10 00:43:01.442522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-10 00:43:01.442533 | orchestrator | 
2026-04-10 00:43:01.442543 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442554 | orchestrator | Friday 10 April 2026 00:42:57 +0000 (0:00:00.445) 0:00:47.569 **********
2026-04-10 00:43:01.442565 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442576 | orchestrator | 
2026-04-10 00:43:01.442587 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442597 | orchestrator | Friday 10 April 2026 00:42:57 +0000 (0:00:00.200) 0:00:47.770 **********
2026-04-10 00:43:01.442608 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442619 | orchestrator | 
2026-04-10 00:43:01.442630 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442661 | orchestrator | Friday 10 April 2026 00:42:57 +0000 (0:00:00.200) 0:00:47.971 **********
2026-04-10 00:43:01.442673 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442684 | orchestrator | 
2026-04-10 00:43:01.442694 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442705 | orchestrator | Friday 10 April 2026 00:42:57 +0000 (0:00:00.204) 0:00:48.176 **********
2026-04-10 00:43:01.442716 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442726 | orchestrator | 
2026-04-10 00:43:01.442756 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442767 | orchestrator | Friday 10 April 2026 00:42:58 +0000 (0:00:00.216) 0:00:48.392 **********
2026-04-10 00:43:01.442793 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442804 | orchestrator | 
2026-04-10 00:43:01.442831 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442856 | orchestrator | Friday 10 April 2026 00:42:58 +0000 (0:00:00.203) 0:00:48.596 **********
2026-04-10 00:43:01.442867 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442878 | orchestrator | 
2026-04-10 00:43:01.442889 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442900 | orchestrator | Friday 10 April 2026 00:42:58 +0000 (0:00:00.445) 0:00:49.042 **********
2026-04-10 00:43:01.442911 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442932 | orchestrator | 
2026-04-10 00:43:01.442943 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442954 | orchestrator | Friday 10 April 2026 00:42:59 +0000 (0:00:00.215) 0:00:49.257 **********
2026-04-10 00:43:01.442965 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:01.442976 | orchestrator | 
2026-04-10 00:43:01.442987 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.442998 | orchestrator | Friday 10 April 2026 00:42:59 +0000 (0:00:00.184) 0:00:49.441 **********
2026-04-10 00:43:01.443008 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21)
2026-04-10 00:43:01.443020 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21)
2026-04-10 00:43:01.443031 | orchestrator | 
2026-04-10 00:43:01.443042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.443052 | orchestrator | Friday 10 April 2026 00:42:59 +0000 (0:00:00.419) 0:00:49.861 **********
2026-04-10 00:43:01.443063 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec)
2026-04-10 00:43:01.443074 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec)
2026-04-10 00:43:01.443085 | orchestrator | 
2026-04-10 00:43:01.443096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-10 00:43:01.443107 | orchestrator | Friday 10 April 2026 00:43:00 +0000 (0:00:00.390) 0:00:50.252 **********
2026-04-10 00:43:01.443117 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf)
2026-04-10 00:43:01.443128 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf)
2026-04-10 00:43:01.443139 | orchestrator | 
2026-04-10 00:43:01.443150 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:43:01.443160 | orchestrator | Friday 10 April 2026 00:43:00 +0000 (0:00:00.426) 0:00:50.679 ********** 2026-04-10 00:43:01.443171 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8) 2026-04-10 00:43:01.443195 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8) 2026-04-10 00:43:01.443206 | orchestrator | 2026-04-10 00:43:01.443217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-10 00:43:01.443228 | orchestrator | Friday 10 April 2026 00:43:00 +0000 (0:00:00.384) 0:00:51.063 ********** 2026-04-10 00:43:01.443238 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-10 00:43:01.443249 | orchestrator | 2026-04-10 00:43:01.443260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:01.443271 | orchestrator | Friday 10 April 2026 00:43:01 +0000 (0:00:00.291) 0:00:51.355 ********** 2026-04-10 00:43:01.443281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-10 00:43:01.443292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-10 00:43:01.443303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-10 00:43:01.443343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-10 00:43:01.443361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-10 00:43:01.443372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-10 00:43:01.443424 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-10 00:43:01.443436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-10 00:43:01.443461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-10 00:43:01.443480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-10 00:43:01.443492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-10 00:43:01.443511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-10 00:43:09.766473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-10 00:43:09.766571 | orchestrator | 2026-04-10 00:43:09.766583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766592 | orchestrator | Friday 10 April 2026 00:43:01 +0000 (0:00:00.374) 0:00:51.729 ********** 2026-04-10 00:43:09.766599 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766608 | orchestrator | 2026-04-10 00:43:09.766616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766624 | orchestrator | Friday 10 April 2026 00:43:01 +0000 (0:00:00.173) 0:00:51.903 ********** 2026-04-10 00:43:09.766631 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766638 | orchestrator | 2026-04-10 00:43:09.766646 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766654 | orchestrator | Friday 10 April 2026 00:43:01 +0000 (0:00:00.168) 0:00:52.072 ********** 2026-04-10 00:43:09.766662 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766669 | orchestrator | 2026-04-10 00:43:09.766676 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766698 | orchestrator | Friday 10 April 2026 00:43:02 +0000 (0:00:00.530) 0:00:52.602 ********** 2026-04-10 00:43:09.766705 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766713 | orchestrator | 2026-04-10 00:43:09.766720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766728 | orchestrator | Friday 10 April 2026 00:43:02 +0000 (0:00:00.176) 0:00:52.779 ********** 2026-04-10 00:43:09.766735 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766742 | orchestrator | 2026-04-10 00:43:09.766750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766757 | orchestrator | Friday 10 April 2026 00:43:02 +0000 (0:00:00.184) 0:00:52.963 ********** 2026-04-10 00:43:09.766764 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766772 | orchestrator | 2026-04-10 00:43:09.766779 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766787 | orchestrator | Friday 10 April 2026 00:43:02 +0000 (0:00:00.209) 0:00:53.173 ********** 2026-04-10 00:43:09.766794 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766801 | orchestrator | 2026-04-10 00:43:09.766808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766816 | orchestrator | Friday 10 April 2026 00:43:03 +0000 (0:00:00.213) 0:00:53.387 ********** 2026-04-10 00:43:09.766823 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:09.766831 | orchestrator | 2026-04-10 00:43:09.766838 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-10 00:43:09.766846 | orchestrator | Friday 10 April 2026 00:43:03 +0000 (0:00:00.216) 0:00:53.603 ********** 
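The loop above includes `_add-device-partitions.yml` once per discovered device (loop0..loop7, sda..sdd, sr0), and only `sda` actually contributes partitions. A rough sketch of what that per-device partition filtering amounts to, using a hypothetical helper (`partitions_of` is not the playbook's actual implementation, just an illustration of the kernel naming convention):

```python
import re

def partitions_of(device: str, candidates: list[str]) -> list[str]:
    """Return candidate kernel names that are partitions of `device`
    (e.g. sda1 belongs to sda, nvme0n1p1 belongs to nvme0n1)."""
    pattern = re.compile(rf"^{re.escape(device)}p?\d+$")
    return [name for name in candidates if pattern.match(name)]

# Names as seen on testbed-node-5 in the log above.
names = ["sda1", "sda14", "sda15", "sda16", "sdb", "sdc", "sdd", "sr0"]
print(partitions_of("sda", names))  # -> ['sda1', 'sda14', 'sda15', 'sda16']
print(partitions_of("sdb", names))  # -> [] (sdb carries a whole-disk OSD)
```

This matches the run above: the partition tasks report `ok` only for the `sda1`/`sda14`/`sda15`/`sda16` items and skip every other device.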
2026-04-10 00:43:09.766853 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-10 00:43:09.766861 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-10 00:43:09.766869 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-10 00:43:09.766877 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-10 00:43:09.766888 | orchestrator |
2026-04-10 00:43:09.766900 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:43:09.766919 | orchestrator | Friday 10 April 2026 00:43:04 +0000 (0:00:00.638) 0:00:54.242 **********
2026-04-10 00:43:09.766935 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.766945 | orchestrator |
2026-04-10 00:43:09.766956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:43:09.766988 | orchestrator | Friday 10 April 2026 00:43:04 +0000 (0:00:00.209) 0:00:54.451 **********
2026-04-10 00:43:09.767001 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767011 | orchestrator |
2026-04-10 00:43:09.767023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:43:09.767035 | orchestrator | Friday 10 April 2026 00:43:04 +0000 (0:00:00.163) 0:00:54.615 **********
2026-04-10 00:43:09.767046 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767060 | orchestrator |
2026-04-10 00:43:09.767072 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-10 00:43:09.767086 | orchestrator | Friday 10 April 2026 00:43:04 +0000 (0:00:00.168) 0:00:54.784 **********
2026-04-10 00:43:09.767098 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767111 | orchestrator |
2026-04-10 00:43:09.767121 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-10 00:43:09.767130 | orchestrator | Friday 10 April 2026 00:43:04 +0000 (0:00:00.189) 0:00:54.974 **********
2026-04-10 00:43:09.767138 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767147 | orchestrator |
2026-04-10 00:43:09.767154 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-10 00:43:09.767161 | orchestrator | Friday 10 April 2026 00:43:05 +0000 (0:00:00.252) 0:00:55.226 **********
2026-04-10 00:43:09.767169 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '09201c46-e11a-5302-956e-912d17e7f9de'}})
2026-04-10 00:43:09.767176 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0863171e-1302-565f-bee5-d18b6804a785'}})
2026-04-10 00:43:09.767183 | orchestrator |
2026-04-10 00:43:09.767191 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-10 00:43:09.767199 | orchestrator | Friday 10 April 2026 00:43:05 +0000 (0:00:00.189) 0:00:55.415 **********
2026-04-10 00:43:09.767208 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:09.767217 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:09.767224 | orchestrator |
2026-04-10 00:43:09.767231 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-10 00:43:09.767255 | orchestrator | Friday 10 April 2026 00:43:06 +0000 (0:00:01.791) 0:00:57.207 **********
2026-04-10 00:43:09.767263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:09.767272 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:09.767279 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767286 | orchestrator |
2026-04-10 00:43:09.767294 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-10 00:43:09.767302 | orchestrator | Friday 10 April 2026 00:43:07 +0000 (0:00:00.138) 0:00:57.345 **********
2026-04-10 00:43:09.767340 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:09.767366 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:09.767378 | orchestrator |
2026-04-10 00:43:09.767391 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-10 00:43:09.767403 | orchestrator | Friday 10 April 2026 00:43:08 +0000 (0:00:01.291) 0:00:58.637 **********
2026-04-10 00:43:09.767415 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:09.767437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:09.767450 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767462 | orchestrator |
2026-04-10 00:43:09.767475 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-10 00:43:09.767482 | orchestrator | Friday 10 April 2026 00:43:08 +0000 (0:00:00.156) 0:00:58.794 **********
2026-04-10 00:43:09.767489 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767497 | orchestrator |
2026-04-10 00:43:09.767504 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-10 00:43:09.767511 | orchestrator | Friday 10 April 2026 00:43:08 +0000 (0:00:00.148) 0:00:58.942 **********
2026-04-10 00:43:09.767518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:09.767526 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:09.767533 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767540 | orchestrator |
2026-04-10 00:43:09.767547 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-10 00:43:09.767555 | orchestrator | Friday 10 April 2026 00:43:08 +0000 (0:00:00.189) 0:00:59.132 **********
2026-04-10 00:43:09.767562 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767569 | orchestrator |
2026-04-10 00:43:09.767576 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-10 00:43:09.767584 | orchestrator | Friday 10 April 2026 00:43:09 +0000 (0:00:00.151) 0:00:59.284 **********
2026-04-10 00:43:09.767591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:09.767598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:09.767605 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767613 | orchestrator |
2026-04-10 00:43:09.767620 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
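The block VG/LV tasks above derive one volume group and one logical volume per OSD device from its `osd_lvm_uuid`: device `sdb` with UUID `09201c46-...` becomes VG `ceph-09201c46-...` holding LV `osd-block-09201c46-...`, and likewise for `sdc`. A minimal sketch of that name derivation, assuming a hypothetical helper (the playbook itself builds these items with Jinja2, not this function):

```python
def lvm_volumes_from_osd_devices(ceph_osd_devices: dict) -> list[dict]:
    """Derive lvm_volumes-style loop items (as seen in the log above)
    from a ceph_osd_devices mapping of device -> {'osd_lvm_uuid': ...}."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for _device, spec in sorted(ceph_osd_devices.items())
    ]

# The two OSD devices reported for testbed-node-5 above.
osd_devices = {
    "sdb": {"osd_lvm_uuid": "09201c46-e11a-5302-956e-912d17e7f9de"},
    "sdc": {"osd_lvm_uuid": "0863171e-1302-565f-bee5-d18b6804a785"},
}
for volume in lvm_volumes_from_osd_devices(osd_devices):
    print(volume)
```

These derived `{'data': ..., 'data_vg': ...}` pairs are exactly the loop items that the "Create block VGs" and "Create block LVs" tasks report as `changed`.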
2026-04-10 00:43:09.767627 | orchestrator | Friday 10 April 2026 00:43:09 +0000 (0:00:00.175) 0:00:59.459 **********
2026-04-10 00:43:09.767634 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767641 | orchestrator |
2026-04-10 00:43:09.767648 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-10 00:43:09.767656 | orchestrator | Friday 10 April 2026 00:43:09 +0000 (0:00:00.136) 0:00:59.596 **********
2026-04-10 00:43:09.767667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:09.767679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:09.767691 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:09.767703 | orchestrator |
2026-04-10 00:43:09.767715 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-10 00:43:09.767727 | orchestrator | Friday 10 April 2026 00:43:09 +0000 (0:00:00.159) 0:00:59.755 **********
2026-04-10 00:43:09.767740 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:09.767753 | orchestrator |
2026-04-10 00:43:09.767765 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-10 00:43:09.767777 | orchestrator | Friday 10 April 2026 00:43:09 +0000 (0:00:00.152) 0:00:59.907 **********
2026-04-10 00:43:09.767799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:16.386404 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:16.386515 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.386533 | orchestrator |
2026-04-10 00:43:16.386546 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-10 00:43:16.386560 | orchestrator | Friday 10 April 2026 00:43:10 +0000 (0:00:00.419) 0:01:00.327 **********
2026-04-10 00:43:16.386571 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:16.386583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:16.386594 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.386605 | orchestrator |
2026-04-10 00:43:16.386633 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-10 00:43:16.386644 | orchestrator | Friday 10 April 2026 00:43:10 +0000 (0:00:00.164) 0:01:00.491 **********
2026-04-10 00:43:16.386655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:16.386667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:16.386678 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.386688 | orchestrator |
2026-04-10 00:43:16.386699 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-10 00:43:16.386710 | orchestrator | Friday 10 April 2026 00:43:10 +0000 (0:00:00.144) 0:01:00.636 **********
2026-04-10 00:43:16.386721 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.386732 | orchestrator |
2026-04-10 00:43:16.386743 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-10 00:43:16.386754 | orchestrator | Friday 10 April 2026 00:43:10 +0000 (0:00:00.136) 0:01:00.773 **********
2026-04-10 00:43:16.386765 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.386776 | orchestrator |
2026-04-10 00:43:16.386787 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-10 00:43:16.386798 | orchestrator | Friday 10 April 2026 00:43:10 +0000 (0:00:00.140) 0:01:00.913 **********
2026-04-10 00:43:16.386809 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.386821 | orchestrator |
2026-04-10 00:43:16.386832 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-10 00:43:16.386843 | orchestrator | Friday 10 April 2026 00:43:10 +0000 (0:00:00.132) 0:01:01.046 **********
2026-04-10 00:43:16.386854 | orchestrator | ok: [testbed-node-5] => {
2026-04-10 00:43:16.386866 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-10 00:43:16.386877 | orchestrator | }
2026-04-10 00:43:16.386889 | orchestrator |
2026-04-10 00:43:16.386901 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-10 00:43:16.386914 | orchestrator | Friday 10 April 2026 00:43:10 +0000 (0:00:00.139) 0:01:01.185 **********
2026-04-10 00:43:16.386926 | orchestrator | ok: [testbed-node-5] => {
2026-04-10 00:43:16.386938 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-10 00:43:16.386951 | orchestrator | }
2026-04-10 00:43:16.386964 | orchestrator |
2026-04-10 00:43:16.386976 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-10 00:43:16.386987 | orchestrator | Friday 10 April 2026 00:43:11 +0000 (0:00:00.149) 0:01:01.334 **********
2026-04-10 00:43:16.386998 | orchestrator | ok: [testbed-node-5] => {
2026-04-10 00:43:16.387009 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-10 00:43:16.387020 | orchestrator | }
2026-04-10 00:43:16.387031 | orchestrator |
2026-04-10 00:43:16.387042 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-10 00:43:16.387053 | orchestrator | Friday 10 April 2026 00:43:11 +0000 (0:00:00.151) 0:01:01.486 **********
2026-04-10 00:43:16.387084 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:16.387096 | orchestrator |
2026-04-10 00:43:16.387107 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-10 00:43:16.387118 | orchestrator | Friday 10 April 2026 00:43:11 +0000 (0:00:00.543) 0:01:02.030 **********
2026-04-10 00:43:16.387129 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:16.387140 | orchestrator |
2026-04-10 00:43:16.387150 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-10 00:43:16.387161 | orchestrator | Friday 10 April 2026 00:43:12 +0000 (0:00:00.521) 0:01:02.551 **********
2026-04-10 00:43:16.387172 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:16.387183 | orchestrator |
2026-04-10 00:43:16.387194 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-10 00:43:16.387204 | orchestrator | Friday 10 April 2026 00:43:12 +0000 (0:00:00.497) 0:01:03.049 **********
2026-04-10 00:43:16.387215 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:16.387226 | orchestrator |
2026-04-10 00:43:16.387236 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-10 00:43:16.387247 | orchestrator | Friday 10 April 2026 00:43:13 +0000 (0:00:00.370) 0:01:03.419 **********
2026-04-10 00:43:16.387257 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387268 | orchestrator |
2026-04-10 00:43:16.387279 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-10 00:43:16.387290 | orchestrator | Friday 10 April 2026 00:43:13 +0000 (0:00:00.162) 0:01:03.582 **********
2026-04-10 00:43:16.387300 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387311 | orchestrator |
2026-04-10 00:43:16.387361 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-10 00:43:16.387373 | orchestrator | Friday 10 April 2026 00:43:13 +0000 (0:00:00.113) 0:01:03.696 **********
2026-04-10 00:43:16.387384 | orchestrator | ok: [testbed-node-5] => {
2026-04-10 00:43:16.387395 | orchestrator |  "vgs_report": {
2026-04-10 00:43:16.387407 | orchestrator |  "vg": []
2026-04-10 00:43:16.387438 | orchestrator |  }
2026-04-10 00:43:16.387451 | orchestrator | }
2026-04-10 00:43:16.387462 | orchestrator |
2026-04-10 00:43:16.387473 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-10 00:43:16.387484 | orchestrator | Friday 10 April 2026 00:43:13 +0000 (0:00:00.135) 0:01:03.831 **********
2026-04-10 00:43:16.387495 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387506 | orchestrator |
2026-04-10 00:43:16.387517 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-10 00:43:16.387528 | orchestrator | Friday 10 April 2026 00:43:13 +0000 (0:00:00.123) 0:01:03.954 **********
2026-04-10 00:43:16.387539 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387549 | orchestrator |
2026-04-10 00:43:16.387560 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-10 00:43:16.387571 | orchestrator | Friday 10 April 2026 00:43:13 +0000 (0:00:00.169) 0:01:04.124 **********
2026-04-10 00:43:16.387581 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387592 | orchestrator |
2026-04-10 00:43:16.387603 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-10 00:43:16.387614 | orchestrator | Friday 10 April 2026 00:43:14 +0000 (0:00:00.155) 0:01:04.280 **********
2026-04-10 00:43:16.387625 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387636 | orchestrator |
2026-04-10 00:43:16.387646 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-10 00:43:16.387657 | orchestrator | Friday 10 April 2026 00:43:14 +0000 (0:00:00.155) 0:01:04.435 **********
2026-04-10 00:43:16.387668 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387678 | orchestrator |
2026-04-10 00:43:16.387689 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-10 00:43:16.387700 | orchestrator | Friday 10 April 2026 00:43:14 +0000 (0:00:00.188) 0:01:04.624 **********
2026-04-10 00:43:16.387711 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387734 | orchestrator |
2026-04-10 00:43:16.387745 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-10 00:43:16.387756 | orchestrator | Friday 10 April 2026 00:43:14 +0000 (0:00:00.143) 0:01:04.767 **********
2026-04-10 00:43:16.387767 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387778 | orchestrator |
2026-04-10 00:43:16.387788 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-10 00:43:16.387799 | orchestrator | Friday 10 April 2026 00:43:14 +0000 (0:00:00.153) 0:01:04.921 **********
2026-04-10 00:43:16.387810 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387820 | orchestrator |
2026-04-10 00:43:16.387831 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-10 00:43:16.387842 | orchestrator | Friday 10 April 2026 00:43:14 +0000 (0:00:00.160) 0:01:05.081 **********
2026-04-10 00:43:16.387853 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387864 | orchestrator |
2026-04-10 00:43:16.387875 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-10 00:43:16.387886 | orchestrator | Friday 10 April 2026 00:43:15 +0000 (0:00:00.343) 0:01:05.424 **********
2026-04-10 00:43:16.387896 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387907 | orchestrator |
2026-04-10 00:43:16.387918 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-10 00:43:16.387929 | orchestrator | Friday 10 April 2026 00:43:15 +0000 (0:00:00.145) 0:01:05.570 **********
2026-04-10 00:43:16.387940 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387950 | orchestrator |
2026-04-10 00:43:16.387961 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-10 00:43:16.387972 | orchestrator | Friday 10 April 2026 00:43:15 +0000 (0:00:00.142) 0:01:05.713 **********
2026-04-10 00:43:16.387983 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.387993 | orchestrator |
2026-04-10 00:43:16.388004 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-10 00:43:16.388015 | orchestrator | Friday 10 April 2026 00:43:15 +0000 (0:00:00.139) 0:01:05.852 **********
2026-04-10 00:43:16.388026 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.388036 | orchestrator |
2026-04-10 00:43:16.388047 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-10 00:43:16.388058 | orchestrator | Friday 10 April 2026 00:43:15 +0000 (0:00:00.143) 0:01:05.996 **********
2026-04-10 00:43:16.388069 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.388080 | orchestrator |
2026-04-10 00:43:16.388091 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-10 00:43:16.388102 | orchestrator | Friday 10 April 2026 00:43:15 +0000 (0:00:00.145) 0:01:06.141 **********
2026-04-10 00:43:16.388113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:16.388124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:16.388135 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.388145 | orchestrator |
2026-04-10 00:43:16.388156 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-10 00:43:16.388167 | orchestrator | Friday 10 April 2026 00:43:16 +0000 (0:00:00.184) 0:01:06.325 **********
2026-04-10 00:43:16.388187 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:16.388199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:16.388210 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:16.388220 | orchestrator |
2026-04-10 00:43:16.388232 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-10 00:43:16.388250 | orchestrator | Friday 10 April 2026 00:43:16 +0000 (0:00:00.202) 0:01:06.528 **********
2026-04-10 00:43:16.388268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:19.720092 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:19.720196 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:19.720212 | orchestrator |
2026-04-10 00:43:19.720225 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-10 00:43:19.720239 | orchestrator | Friday 10 April 2026 00:43:16 +0000 (0:00:00.163) 0:01:06.691 **********
2026-04-10 00:43:19.720250 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:19.720278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:19.720290 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:19.720300 | orchestrator |
2026-04-10 00:43:19.720312 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-10 00:43:19.720390 | orchestrator | Friday 10 April 2026 00:43:16 +0000 (0:00:00.167) 0:01:06.859 **********
2026-04-10 00:43:19.720403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:19.720414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:19.720425 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:19.720436 | orchestrator |
2026-04-10 00:43:19.720448 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-10 00:43:19.720459 | orchestrator | Friday 10 April 2026 00:43:16 +0000 (0:00:00.151) 0:01:07.010 **********
2026-04-10 00:43:19.720470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:19.720481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:19.720492 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:19.720503 | orchestrator |
2026-04-10 00:43:19.720514 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-10 00:43:19.720525 | orchestrator | Friday 10 April 2026 00:43:16 +0000 (0:00:00.157) 0:01:07.168 **********
2026-04-10 00:43:19.720536 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:19.720547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:19.720558 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:19.720568 | orchestrator |
2026-04-10 00:43:19.720579 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-10 00:43:19.720590 | orchestrator | Friday 10 April 2026 00:43:17 +0000 (0:00:00.502) 0:01:07.670 **********
2026-04-10 00:43:19.720601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:19.720612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:19.720624 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:43:19.720660 | orchestrator |
2026-04-10 00:43:19.720673 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-10 00:43:19.720686 | orchestrator | Friday 10 April 2026 00:43:17 +0000 (0:00:00.173) 0:01:07.844 **********
2026-04-10 00:43:19.720699 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:19.720712 | orchestrator |
2026-04-10 00:43:19.720725 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-10 00:43:19.720737 | orchestrator | Friday 10 April 2026 00:43:18 +0000 (0:00:00.530) 0:01:08.374 **********
2026-04-10 00:43:19.720749 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:19.720761 | orchestrator |
2026-04-10 00:43:19.720773 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-10 00:43:19.720786 | orchestrator | Friday 10 April 2026 00:43:18 +0000 (0:00:00.544) 0:01:08.919 **********
2026-04-10 00:43:19.720798 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:43:19.720810 | orchestrator |
2026-04-10 00:43:19.720822 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-10 00:43:19.720836 | orchestrator | Friday 10 April 2026 00:43:18 +0000 (0:00:00.151) 0:01:09.070 **********
2026-04-10 00:43:19.720848 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'vg_name': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})
2026-04-10 00:43:19.720862 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'vg_name': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})
2026-04-10 00:43:19.720875 | orchestrator |
2026-04-10 00:43:19.720887 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-10 00:43:19.720899 | orchestrator | Friday 10 April 2026 00:43:19 +0000 (0:00:00.162) 0:01:09.232 **********
2026-04-10 00:43:19.720932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg':
'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})  2026-04-10 00:43:19.720945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})  2026-04-10 00:43:19.720958 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:19.720969 | orchestrator | 2026-04-10 00:43:19.720980 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-10 00:43:19.720991 | orchestrator | Friday 10 April 2026 00:43:19 +0000 (0:00:00.192) 0:01:09.425 ********** 2026-04-10 00:43:19.721007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})  2026-04-10 00:43:19.721019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})  2026-04-10 00:43:19.721030 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:19.721041 | orchestrator | 2026-04-10 00:43:19.721052 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-10 00:43:19.721062 | orchestrator | Friday 10 April 2026 00:43:19 +0000 (0:00:00.172) 0:01:09.598 ********** 2026-04-10 00:43:19.721073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'})  2026-04-10 00:43:19.721084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'})  2026-04-10 00:43:19.721095 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:19.721106 | orchestrator | 2026-04-10 00:43:19.721117 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-10 
00:43:19.721128 | orchestrator | Friday 10 April 2026 00:43:19 +0000 (0:00:00.161) 0:01:09.759 ********** 2026-04-10 00:43:19.721139 | orchestrator | ok: [testbed-node-5] => { 2026-04-10 00:43:19.721150 | orchestrator |  "lvm_report": { 2026-04-10 00:43:19.721162 | orchestrator |  "lv": [ 2026-04-10 00:43:19.721181 | orchestrator |  { 2026-04-10 00:43:19.721192 | orchestrator |  "lv_name": "osd-block-0863171e-1302-565f-bee5-d18b6804a785", 2026-04-10 00:43:19.721204 | orchestrator |  "vg_name": "ceph-0863171e-1302-565f-bee5-d18b6804a785" 2026-04-10 00:43:19.721215 | orchestrator |  }, 2026-04-10 00:43:19.721226 | orchestrator |  { 2026-04-10 00:43:19.721237 | orchestrator |  "lv_name": "osd-block-09201c46-e11a-5302-956e-912d17e7f9de", 2026-04-10 00:43:19.721249 | orchestrator |  "vg_name": "ceph-09201c46-e11a-5302-956e-912d17e7f9de" 2026-04-10 00:43:19.721259 | orchestrator |  } 2026-04-10 00:43:19.721270 | orchestrator |  ], 2026-04-10 00:43:19.721281 | orchestrator |  "pv": [ 2026-04-10 00:43:19.721292 | orchestrator |  { 2026-04-10 00:43:19.721303 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-10 00:43:19.721314 | orchestrator |  "vg_name": "ceph-09201c46-e11a-5302-956e-912d17e7f9de" 2026-04-10 00:43:19.721344 | orchestrator |  }, 2026-04-10 00:43:19.721356 | orchestrator |  { 2026-04-10 00:43:19.721366 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-10 00:43:19.721377 | orchestrator |  "vg_name": "ceph-0863171e-1302-565f-bee5-d18b6804a785" 2026-04-10 00:43:19.721388 | orchestrator |  } 2026-04-10 00:43:19.721399 | orchestrator |  ] 2026-04-10 00:43:19.721409 | orchestrator |  } 2026-04-10 00:43:19.721420 | orchestrator | } 2026-04-10 00:43:19.721431 | orchestrator | 2026-04-10 00:43:19.721442 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:43:19.721453 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-10 00:43:19.721464 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-10 00:43:19.721475 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-10 00:43:19.721486 | orchestrator | 2026-04-10 00:43:19.721497 | orchestrator | 2026-04-10 00:43:19.721507 | orchestrator | 2026-04-10 00:43:19.721518 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:43:19.721529 | orchestrator | Friday 10 April 2026 00:43:19 +0000 (0:00:00.156) 0:01:09.916 ********** 2026-04-10 00:43:19.721540 | orchestrator | =============================================================================== 2026-04-10 00:43:19.721551 | orchestrator | Create block VGs -------------------------------------------------------- 5.52s 2026-04-10 00:43:19.721562 | orchestrator | Create block LVs -------------------------------------------------------- 4.19s 2026-04-10 00:43:19.721572 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.92s 2026-04-10 00:43:19.721583 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.59s 2026-04-10 00:43:19.721594 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-04-10 00:43:19.721604 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s 2026-04-10 00:43:19.721615 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2026-04-10 00:43:19.721626 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s 2026-04-10 00:43:19.721643 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s 2026-04-10 00:43:20.325625 | orchestrator | Print LVM report data --------------------------------------------------- 1.09s 2026-04-10 
00:43:20.325716 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s 2026-04-10 00:43:20.325725 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s 2026-04-10 00:43:20.325732 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2026-04-10 00:43:20.325738 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.80s 2026-04-10 00:43:20.325769 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.73s 2026-04-10 00:43:20.325776 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-04-10 00:43:20.325795 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s 2026-04-10 00:43:20.325801 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.69s 2026-04-10 00:43:20.325807 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2026-04-10 00:43:20.325814 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.66s 2026-04-10 00:43:32.055889 | orchestrator | 2026-04-10 00:43:32 | INFO  | Prepare task for execution of facts. 2026-04-10 00:43:32.128990 | orchestrator | 2026-04-10 00:43:32 | INFO  | Task a20bd022-bb96-4b99-822d-dd186c9e38b4 (facts) was prepared for execution. 2026-04-10 00:43:32.129203 | orchestrator | 2026-04-10 00:43:32 | INFO  | It takes a moment until task a20bd022-bb96-4b99-822d-dd186c9e38b4 (facts) has been started and output is visible here. 
2026-04-10 00:43:43.800834 | orchestrator | 2026-04-10 00:43:43.800954 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-10 00:43:43.800974 | orchestrator | 2026-04-10 00:43:43.800987 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-10 00:43:43.800999 | orchestrator | Friday 10 April 2026 00:43:35 +0000 (0:00:00.386) 0:00:00.386 ********** 2026-04-10 00:43:43.801010 | orchestrator | ok: [testbed-manager] 2026-04-10 00:43:43.801023 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:43:43.801034 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:43:43.801045 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:43:43.801056 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:43:43.801066 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:43:43.801077 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:43:43.801088 | orchestrator | 2026-04-10 00:43:43.801099 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-10 00:43:43.801110 | orchestrator | Friday 10 April 2026 00:43:37 +0000 (0:00:01.537) 0:00:01.924 ********** 2026-04-10 00:43:43.801121 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:43:43.801134 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:43:43.801144 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:43:43.801155 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:43:43.801166 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:43:43.801177 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:43:43.801188 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:43.801199 | orchestrator | 2026-04-10 00:43:43.801210 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-10 00:43:43.801220 | orchestrator | 2026-04-10 00:43:43.801232 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-10 00:43:43.801243 | orchestrator | Friday 10 April 2026 00:43:38 +0000 (0:00:01.191) 0:00:03.116 ********** 2026-04-10 00:43:43.801253 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:43:43.801264 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:43:43.801293 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:43:43.801315 | orchestrator | ok: [testbed-manager] 2026-04-10 00:43:43.801327 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:43:43.801412 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:43:43.801425 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:43:43.801437 | orchestrator | 2026-04-10 00:43:43.801450 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-10 00:43:43.801462 | orchestrator | 2026-04-10 00:43:43.801474 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-10 00:43:43.801487 | orchestrator | Friday 10 April 2026 00:43:43 +0000 (0:00:04.662) 0:00:07.778 ********** 2026-04-10 00:43:43.801500 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:43:43.801513 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:43:43.801551 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:43:43.801564 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:43:43.801575 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:43:43.801587 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:43:43.801599 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:43:43.801611 | orchestrator | 2026-04-10 00:43:43.801623 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:43:43.801635 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:43:43.801649 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-10 00:43:43.801661 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:43:43.801673 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:43:43.801685 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:43:43.801697 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:43:43.801710 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:43:43.801722 | orchestrator | 2026-04-10 00:43:43.801733 | orchestrator | 2026-04-10 00:43:43.801744 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:43:43.801755 | orchestrator | Friday 10 April 2026 00:43:43 +0000 (0:00:00.447) 0:00:08.226 ********** 2026-04-10 00:43:43.801766 | orchestrator | =============================================================================== 2026-04-10 00:43:43.801777 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.66s 2026-04-10 00:43:43.801787 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.54s 2026-04-10 00:43:43.801815 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s 2026-04-10 00:43:43.801827 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-04-10 00:43:55.178627 | orchestrator | 2026-04-10 00:43:55 | INFO  | Prepare task for execution of frr. 2026-04-10 00:43:55.246694 | orchestrator | 2026-04-10 00:43:55 | INFO  | Task 4efd99cb-f5f0-4de7-ba24-5a7c19ddb641 (frr) was prepared for execution. 
2026-04-10 00:43:55.246786 | orchestrator | 2026-04-10 00:43:55 | INFO  | It takes a moment until task 4efd99cb-f5f0-4de7-ba24-5a7c19ddb641 (frr) has been started and output is visible here. 2026-04-10 00:44:17.970769 | orchestrator | 2026-04-10 00:44:17.970911 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-10 00:44:17.970931 | orchestrator | 2026-04-10 00:44:17.970943 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-10 00:44:17.970954 | orchestrator | Friday 10 April 2026 00:43:58 +0000 (0:00:00.269) 0:00:00.269 ********** 2026-04-10 00:44:17.971024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-10 00:44:17.971041 | orchestrator | 2026-04-10 00:44:17.971053 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-10 00:44:17.971064 | orchestrator | Friday 10 April 2026 00:43:58 +0000 (0:00:00.214) 0:00:00.484 ********** 2026-04-10 00:44:17.971075 | orchestrator | changed: [testbed-manager] 2026-04-10 00:44:17.971088 | orchestrator | 2026-04-10 00:44:17.971099 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-10 00:44:17.971134 | orchestrator | Friday 10 April 2026 00:43:59 +0000 (0:00:01.361) 0:00:01.845 ********** 2026-04-10 00:44:17.971146 | orchestrator | changed: [testbed-manager] 2026-04-10 00:44:17.971157 | orchestrator | 2026-04-10 00:44:17.971168 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-10 00:44:17.971179 | orchestrator | Friday 10 April 2026 00:44:08 +0000 (0:00:08.290) 0:00:10.135 ********** 2026-04-10 00:44:17.971190 | orchestrator | ok: [testbed-manager] 2026-04-10 00:44:17.971201 | orchestrator | 2026-04-10 00:44:17.971213 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-10 00:44:17.971225 | orchestrator | Friday 10 April 2026 00:44:08 +0000 (0:00:00.925) 0:00:11.061 ********** 2026-04-10 00:44:17.971236 | orchestrator | changed: [testbed-manager] 2026-04-10 00:44:17.971247 | orchestrator | 2026-04-10 00:44:17.971257 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-10 00:44:17.971269 | orchestrator | Friday 10 April 2026 00:44:09 +0000 (0:00:00.911) 0:00:11.972 ********** 2026-04-10 00:44:17.971279 | orchestrator | ok: [testbed-manager] 2026-04-10 00:44:17.971290 | orchestrator | 2026-04-10 00:44:17.971303 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-10 00:44:17.971316 | orchestrator | Friday 10 April 2026 00:44:11 +0000 (0:00:01.185) 0:00:13.158 ********** 2026-04-10 00:44:17.971328 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:44:17.971341 | orchestrator | 2026-04-10 00:44:17.971383 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-10 00:44:17.971406 | orchestrator | Friday 10 April 2026 00:44:11 +0000 (0:00:00.156) 0:00:13.315 ********** 2026-04-10 00:44:17.971420 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:44:17.971432 | orchestrator | 2026-04-10 00:44:17.971445 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-10 00:44:17.971457 | orchestrator | Friday 10 April 2026 00:44:11 +0000 (0:00:00.287) 0:00:13.602 ********** 2026-04-10 00:44:17.971469 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:44:17.971481 | orchestrator | 2026-04-10 00:44:17.971494 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-10 00:44:17.971507 | orchestrator | Friday 10 April 2026 00:44:11 +0000 (0:00:00.159) 0:00:13.761 ********** 2026-04-10 
00:44:17.971519 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:44:17.971531 | orchestrator | 2026-04-10 00:44:17.971543 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-10 00:44:17.971556 | orchestrator | Friday 10 April 2026 00:44:11 +0000 (0:00:00.131) 0:00:13.893 ********** 2026-04-10 00:44:17.971569 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:44:17.971581 | orchestrator | 2026-04-10 00:44:17.971593 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-10 00:44:17.971606 | orchestrator | Friday 10 April 2026 00:44:11 +0000 (0:00:00.199) 0:00:14.093 ********** 2026-04-10 00:44:17.971617 | orchestrator | changed: [testbed-manager] 2026-04-10 00:44:17.971627 | orchestrator | 2026-04-10 00:44:17.971638 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-10 00:44:17.971649 | orchestrator | Friday 10 April 2026 00:44:12 +0000 (0:00:00.913) 0:00:15.006 ********** 2026-04-10 00:44:17.971660 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-10 00:44:17.971671 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-10 00:44:17.971683 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-10 00:44:17.971694 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-10 00:44:17.971705 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-10 00:44:17.971716 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-10 00:44:17.971736 | orchestrator | 2026-04-10 00:44:17.971747 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-10 00:44:17.971759 | orchestrator | Friday 10 April 2026 00:44:15 +0000 (0:00:02.175) 0:00:17.181 ********** 2026-04-10 00:44:17.971770 | orchestrator | ok: [testbed-manager] 2026-04-10 00:44:17.971781 | orchestrator | 2026-04-10 00:44:17.971792 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-10 00:44:17.971803 | orchestrator | Friday 10 April 2026 00:44:16 +0000 (0:00:01.182) 0:00:18.363 ********** 2026-04-10 00:44:17.971814 | orchestrator | changed: [testbed-manager] 2026-04-10 00:44:17.971824 | orchestrator | 2026-04-10 00:44:17.971835 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:44:17.971847 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-10 00:44:17.971858 | orchestrator | 2026-04-10 00:44:17.971869 | orchestrator | 2026-04-10 00:44:17.971899 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:44:17.971911 | orchestrator | Friday 10 April 2026 00:44:17 +0000 (0:00:01.382) 0:00:19.746 ********** 2026-04-10 00:44:17.971922 | orchestrator | =============================================================================== 2026-04-10 00:44:17.971933 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.29s 2026-04-10 00:44:17.971963 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.18s 2026-04-10 00:44:17.971975 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s 2026-04-10 00:44:17.971992 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.36s 2026-04-10 00:44:17.972009 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 
2026-04-10 00:44:17.972028 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.18s 2026-04-10 00:44:17.972052 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.93s 2026-04-10 00:44:17.972075 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.91s 2026-04-10 00:44:17.972092 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2026-04-10 00:44:17.972109 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.29s 2026-04-10 00:44:17.972125 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2026-04-10 00:44:17.972142 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.20s 2026-04-10 00:44:17.972160 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.16s 2026-04-10 00:44:17.972177 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-04-10 00:44:17.972194 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-04-10 00:44:18.141789 | orchestrator | 2026-04-10 00:44:18.145298 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Apr 10 00:44:18 UTC 2026 2026-04-10 00:44:18.145418 | orchestrator | 2026-04-10 00:44:19.284469 | orchestrator | 2026-04-10 00:44:19 | INFO  | Collection nutshell is prepared for execution 2026-04-10 00:44:19.399294 | orchestrator | 2026-04-10 00:44:19 | INFO  | A [0] - dotfiles 2026-04-10 00:44:29.539533 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [0] - homer 2026-04-10 00:44:29.539713 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [0] - netdata 2026-04-10 00:44:29.539739 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [0] - openstackclient 2026-04-10 00:44:29.539761 | orchestrator | 2026-04-10 
00:44:29 | INFO  | A [0] - phpmyadmin 2026-04-10 00:44:29.539781 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [0] - common 2026-04-10 00:44:29.542938 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- loadbalancer 2026-04-10 00:44:29.543235 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [2] --- opensearch 2026-04-10 00:44:29.543435 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [2] --- mariadb-ng 2026-04-10 00:44:29.543805 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [3] ---- horizon 2026-04-10 00:44:29.544083 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [3] ---- keystone 2026-04-10 00:44:29.544535 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- neutron 2026-04-10 00:44:29.545442 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [5] ------ wait-for-nova 2026-04-10 00:44:29.545520 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [6] ------- octavia 2026-04-10 00:44:29.546666 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- barbican 2026-04-10 00:44:29.546708 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- designate 2026-04-10 00:44:29.546916 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- ironic 2026-04-10 00:44:29.547577 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- placement 2026-04-10 00:44:29.547634 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- magnum 2026-04-10 00:44:29.549142 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- openvswitch 2026-04-10 00:44:29.549251 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [2] --- ovn 2026-04-10 00:44:29.549896 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- memcached 2026-04-10 00:44:29.549929 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- redis 2026-04-10 00:44:29.550274 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- rabbitmq-ng 2026-04-10 00:44:29.550660 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [0] - kubernetes 2026-04-10 00:44:29.553222 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- 
kubeconfig 2026-04-10 00:44:29.553269 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- copy-kubeconfig 2026-04-10 00:44:29.553763 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [0] - ceph 2026-04-10 00:44:29.555799 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [1] -- ceph-pools 2026-04-10 00:44:29.555845 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [2] --- copy-ceph-keys 2026-04-10 00:44:29.556132 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [3] ---- cephclient 2026-04-10 00:44:29.556161 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-10 00:44:29.556436 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- wait-for-keystone 2026-04-10 00:44:29.557942 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-10 00:44:29.557973 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [5] ------ glance 2026-04-10 00:44:29.558062 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [5] ------ cinder 2026-04-10 00:44:29.558079 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [5] ------ nova 2026-04-10 00:44:29.558089 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [4] ----- prometheus 2026-04-10 00:44:29.558099 | orchestrator | 2026-04-10 00:44:29 | INFO  | A [5] ------ grafana 2026-04-10 00:44:29.770090 | orchestrator | 2026-04-10 00:44:29 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-10 00:44:29.770181 | orchestrator | 2026-04-10 00:44:29 | INFO  | Tasks are running in the background 2026-04-10 00:44:31.565045 | orchestrator | 2026-04-10 00:44:31 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-10 00:44:33.765941 | orchestrator | 2026-04-10 00:44:33 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state STARTED 2026-04-10 00:44:33.766245 | orchestrator | 2026-04-10 00:44:33 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED 2026-04-10 00:44:33.767136 | orchestrator | 2026-04-10 00:44:33 | INFO 
| Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:33.767625 | orchestrator | 2026-04-10 00:44:33 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:33.769838 | orchestrator | 2026-04-10 00:44:33 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:33.770478 | orchestrator | 2026-04-10 00:44:33 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:33.771160 | orchestrator | 2026-04-10 00:44:33 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:33.771190 | orchestrator | 2026-04-10 00:44:33 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:44:36.812697 | orchestrator | 2026-04-10 00:44:36 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state STARTED
2026-04-10 00:44:36.812814 | orchestrator | 2026-04-10 00:44:36 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:44:36.813744 | orchestrator | 2026-04-10 00:44:36 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:36.815294 | orchestrator | 2026-04-10 00:44:36 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:36.816313 | orchestrator | 2026-04-10 00:44:36 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:36.816984 | orchestrator | 2026-04-10 00:44:36 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:36.817464 | orchestrator | 2026-04-10 00:44:36 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:36.817485 | orchestrator | 2026-04-10 00:44:36 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:44:39.859086 | orchestrator | 2026-04-10 00:44:39 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state STARTED
2026-04-10 00:44:39.859244 | orchestrator | 2026-04-10 00:44:39 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:44:39.859271 | orchestrator | 2026-04-10 00:44:39 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:39.859291 | orchestrator | 2026-04-10 00:44:39 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:39.859311 | orchestrator | 2026-04-10 00:44:39 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:39.859347 | orchestrator | 2026-04-10 00:44:39 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:39.860526 | orchestrator | 2026-04-10 00:44:39 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:39.860558 | orchestrator | 2026-04-10 00:44:39 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:44:43.003808 | orchestrator | 2026-04-10 00:44:42 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state STARTED
2026-04-10 00:44:43.003884 | orchestrator | 2026-04-10 00:44:42 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:44:43.003892 | orchestrator | 2026-04-10 00:44:42 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:43.003898 | orchestrator | 2026-04-10 00:44:42 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:43.003903 | orchestrator | 2026-04-10 00:44:42 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:43.003908 | orchestrator | 2026-04-10 00:44:42 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:43.003932 | orchestrator | 2026-04-10 00:44:42 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:43.003938 | orchestrator | 2026-04-10 00:44:42 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:44:46.055803 | orchestrator | 2026-04-10 00:44:46 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state STARTED
2026-04-10 00:44:46.057075 | orchestrator | 2026-04-10 00:44:46 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:44:46.057803 | orchestrator | 2026-04-10 00:44:46 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:46.058144 | orchestrator | 2026-04-10 00:44:46 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:46.059735 | orchestrator | 2026-04-10 00:44:46 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:46.060156 | orchestrator | 2026-04-10 00:44:46 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:46.060655 | orchestrator | 2026-04-10 00:44:46 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:46.060686 | orchestrator | 2026-04-10 00:44:46 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:44:49.102899 | orchestrator | 2026-04-10 00:44:49 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state STARTED
2026-04-10 00:44:49.102990 | orchestrator | 2026-04-10 00:44:49 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:44:49.103002 | orchestrator | 2026-04-10 00:44:49 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:49.103011 | orchestrator | 2026-04-10 00:44:49 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:49.103019 | orchestrator | 2026-04-10 00:44:49 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:49.103027 | orchestrator | 2026-04-10 00:44:49 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:49.110519 | orchestrator | 2026-04-10 00:44:49 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:49.110598 | orchestrator | 2026-04-10
00:44:49 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:44:52.174967 | orchestrator | 2026-04-10 00:44:52 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state STARTED
2026-04-10 00:44:52.175083 | orchestrator | 2026-04-10 00:44:52 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:44:52.175104 | orchestrator | 2026-04-10 00:44:52 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:52.175942 | orchestrator | 2026-04-10 00:44:52 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:52.176659 | orchestrator | 2026-04-10 00:44:52 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:52.177544 | orchestrator | 2026-04-10 00:44:52 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:52.178345 | orchestrator | 2026-04-10 00:44:52 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:52.178417 | orchestrator | 2026-04-10 00:44:52 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:44:55.373092 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task dd91da0c-6dc6-4644-95f2-1ffbc6eb8cfd is in state SUCCESS
2026-04-10 00:44:55.373191 | orchestrator |
2026-04-10 00:44:55.373206 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-04-10 00:44:55.373211 | orchestrator |
2026-04-10 00:44:55.373228 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-04-10 00:44:55.373233 | orchestrator | Friday 10 April 2026 00:44:39 +0000 (0:00:00.631) 0:00:00.631 **********
2026-04-10 00:44:55.373237 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:44:55.373242 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:44:55.373245 | orchestrator | changed: [testbed-manager]
2026-04-10 00:44:55.373249 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:44:55.373254 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:44:55.373261 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:44:55.373266 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:44:55.373272 | orchestrator |
2026-04-10 00:44:55.373278 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-04-10 00:44:55.373284 | orchestrator | Friday 10 April 2026 00:44:44 +0000 (0:00:04.710) 0:00:05.342 **********
2026-04-10 00:44:55.373291 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-10 00:44:55.373297 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-10 00:44:55.373302 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-10 00:44:55.373308 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-10 00:44:55.373314 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-10 00:44:55.373319 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-10 00:44:55.373325 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-10 00:44:55.373330 | orchestrator |
2026-04-10 00:44:55.373337 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.]
***
2026-04-10 00:44:55.373343 | orchestrator | Friday 10 April 2026 00:44:46 +0000 (0:00:01.919) 0:00:07.261 **********
2026-04-10 00:44:55.373351 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-10 00:44:45.205438', 'end': '2026-04-10 00:44:45.211696', 'delta': '0:00:00.006258', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-10 00:44:55.373360 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-10 00:44:45.289735', 'end': '2026-04-10 00:44:46.296234', 'delta': '0:00:01.006499', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-10 00:44:55.373384 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-10 00:44:45.206318', 'end': '2026-04-10 00:44:45.215273', 'delta': '0:00:00.008955', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-10 00:44:55.373423 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-10 00:44:45.283054', 'end': '2026-04-10 00:44:45.293211', 'delta': '0:00:00.010157', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-10 00:44:55.373431 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-10 00:44:45.268569', 'end': '2026-04-10 00:44:45.275985', 'delta': '0:00:00.007416', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-10 00:44:55.373437 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-10 00:44:45.485587', 'end': '2026-04-10 00:44:45.494829', 'delta': '0:00:00.009242', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-10 00:44:55.373443 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-10 00:44:45.138162', 'end': '2026-04-10 00:44:45.143357', 'delta': '0:00:00.005195', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-04-10 00:44:55.373450 | orchestrator |
2026-04-10 00:44:55.373566 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-04-10 00:44:55.373574 | orchestrator | Friday 10 April 2026 00:44:47 +0000 (0:00:01.083) 0:00:08.345 **********
2026-04-10 00:44:55.373578 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-04-10 00:44:55.373582 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-04-10 00:44:55.373594 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-04-10 00:44:55.373598 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-04-10 00:44:55.373601 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-04-10 00:44:55.373605 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-04-10 00:44:55.373609 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-04-10 00:44:55.373613 | orchestrator |
2026-04-10 00:44:55.373616 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
****************** 2026-04-10 00:44:55.373620 | orchestrator | Friday 10 April 2026 00:44:49 +0000 (0:00:01.508) 0:00:09.853 ********** 2026-04-10 00:44:55.373624 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-10 00:44:55.373629 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-10 00:44:55.373633 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-10 00:44:55.373716 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-10 00:44:55.373749 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-10 00:44:55.373755 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-10 00:44:55.373758 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-10 00:44:55.373762 | orchestrator | 2026-04-10 00:44:55.373766 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:44:55.373779 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:44:55.373784 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:44:55.373788 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:44:55.373792 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:44:55.373796 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:44:55.373800 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:44:55.373826 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:44:55.373867 | orchestrator | 2026-04-10 00:44:55.373871 | orchestrator | 2026-04-10 00:44:55.373875 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-04-10 00:44:55.373879 | orchestrator | Friday 10 April 2026 00:44:52 +0000 (0:00:03.693) 0:00:13.546 ********** 2026-04-10 00:44:55.374099 | orchestrator | =============================================================================== 2026-04-10 00:44:55.374111 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.71s 2026-04-10 00:44:55.374117 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.69s 2026-04-10 00:44:55.374123 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.92s 2026-04-10 00:44:55.374129 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.51s 2026-04-10 00:44:55.374133 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.08s 2026-04-10 00:44:55.374137 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED 2026-04-10 00:44:55.374868 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED 2026-04-10 00:44:55.375736 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:44:55.376496 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED 2026-04-10 00:44:55.378072 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED 2026-04-10 00:44:55.378812 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:44:55.379813 | orchestrator | 2026-04-10 00:44:55 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED 2026-04-10 00:44:55.379838 | orchestrator | 2026-04-10 00:44:55 | INFO  | Wait 1 second(s) 
until the next check
2026-04-10 00:44:58.501561 | orchestrator | 2026-04-10 00:44:58 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:44:58.501654 | orchestrator | 2026-04-10 00:44:58 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:44:58.501663 | orchestrator | 2026-04-10 00:44:58 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:44:58.501669 | orchestrator | 2026-04-10 00:44:58 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:44:58.501676 | orchestrator | 2026-04-10 00:44:58 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:44:58.501701 | orchestrator | 2026-04-10 00:44:58 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:44:58.501707 | orchestrator | 2026-04-10 00:44:58 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:44:58.501714 | orchestrator | 2026-04-10 00:44:58 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:01.634829 | orchestrator | 2026-04-10 00:45:01 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:01.714264 | orchestrator | 2026-04-10 00:45:01 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:01.714338 | orchestrator | 2026-04-10 00:45:01 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:01.714344 | orchestrator | 2026-04-10 00:45:01 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:01.714348 | orchestrator | 2026-04-10 00:45:01 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:01.714352 | orchestrator | 2026-04-10 00:45:01 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:01.714357 | orchestrator | 2026-04-10 00:45:01 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:45:01.714361 | orchestrator | 2026-04-10 00:45:01 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:04.732087 | orchestrator | 2026-04-10 00:45:04 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:04.735118 | orchestrator | 2026-04-10 00:45:04 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:04.750367 | orchestrator | 2026-04-10 00:45:04 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:04.753670 | orchestrator | 2026-04-10 00:45:04 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:04.755223 | orchestrator | 2026-04-10 00:45:04 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:04.757261 | orchestrator | 2026-04-10 00:45:04 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:04.758199 | orchestrator | 2026-04-10 00:45:04 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:45:04.758622 | orchestrator | 2026-04-10 00:45:04 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:07.851625 | orchestrator | 2026-04-10 00:45:07 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:07.851702 | orchestrator | 2026-04-10 00:45:07 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:07.851716 | orchestrator | 2026-04-10 00:45:07 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:07.851726 | orchestrator | 2026-04-10 00:45:07 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:07.851732 | orchestrator | 2026-04-10 00:45:07 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:07.851737 | orchestrator | 2026-04-10 00:45:07 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:07.851743 | orchestrator | 2026-04-10 00:45:07 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:45:07.851749 | orchestrator | 2026-04-10 00:45:07 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:10.905946 | orchestrator | 2026-04-10 00:45:10 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:10.906198 | orchestrator | 2026-04-10 00:45:10 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:10.906892 | orchestrator | 2026-04-10 00:45:10 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:10.908805 | orchestrator | 2026-04-10 00:45:10 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:10.910734 | orchestrator | 2026-04-10 00:45:10 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:10.911580 | orchestrator | 2026-04-10 00:45:10 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:10.913507 | orchestrator | 2026-04-10 00:45:10 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:45:10.913560 | orchestrator | 2026-04-10 00:45:10 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:13.998084 | orchestrator | 2026-04-10 00:45:13 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:13.998190 | orchestrator | 2026-04-10 00:45:13 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:14.002454 | orchestrator | 2026-04-10 00:45:13 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:14.002538 | orchestrator | 2026-04-10 00:45:13 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:14.002547 | orchestrator | 2026-04-10 00:45:13 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:14.002554 | orchestrator | 2026-04-10 00:45:13 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:14.002561 | orchestrator | 2026-04-10 00:45:13 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:45:14.002569 | orchestrator | 2026-04-10 00:45:14 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:17.145713 | orchestrator | 2026-04-10 00:45:17 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:17.145811 | orchestrator | 2026-04-10 00:45:17 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:17.145830 | orchestrator | 2026-04-10 00:45:17 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:17.145844 | orchestrator | 2026-04-10 00:45:17 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:17.145883 | orchestrator | 2026-04-10 00:45:17 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:17.145898 | orchestrator | 2026-04-10 00:45:17 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:17.145910 | orchestrator | 2026-04-10 00:45:17 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:45:17.145922 | orchestrator | 2026-04-10 00:45:17 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:20.128569 | orchestrator | 2026-04-10 00:45:20 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:20.136939 | orchestrator | 2026-04-10 00:45:20 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:20.139303 | orchestrator | 2026-04-10 00:45:20 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:20.139995 | orchestrator | 2026-04-10 00:45:20 | INFO  | Task
4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:20.140477 | orchestrator | 2026-04-10 00:45:20 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:20.141340 | orchestrator | 2026-04-10 00:45:20 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:20.142046 | orchestrator | 2026-04-10 00:45:20 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state STARTED
2026-04-10 00:45:20.142077 | orchestrator | 2026-04-10 00:45:20 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:23.233690 | orchestrator | 2026-04-10 00:45:23 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:23.236944 | orchestrator | 2026-04-10 00:45:23 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:23.237017 | orchestrator | 2026-04-10 00:45:23 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:23.237025 | orchestrator | 2026-04-10 00:45:23 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:23.238986 | orchestrator | 2026-04-10 00:45:23 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:23.239075 | orchestrator | 2026-04-10 00:45:23 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:23.239667 | orchestrator | 2026-04-10 00:45:23 | INFO  | Task 0726adef-7376-4c73-80f8-a268c1d0154c is in state SUCCESS
2026-04-10 00:45:23.239705 | orchestrator | 2026-04-10 00:45:23 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:26.308947 | orchestrator | 2026-04-10 00:45:26 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:26.309065 | orchestrator | 2026-04-10 00:45:26 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:26.309305 | orchestrator | 2026-04-10 00:45:26 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:26.310079 | orchestrator | 2026-04-10 00:45:26 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state STARTED
2026-04-10 00:45:26.310309 | orchestrator | 2026-04-10 00:45:26 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:26.311364 | orchestrator | 2026-04-10 00:45:26 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:26.311507 | orchestrator | 2026-04-10 00:45:26 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:29.361092 | orchestrator | 2026-04-10 00:45:29 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:29.362211 | orchestrator | 2026-04-10 00:45:29 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:29.366891 | orchestrator | 2026-04-10 00:45:29 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:29.380419 | orchestrator | 2026-04-10 00:45:29 | INFO  | Task 4a51407b-564c-416e-9338-03d342401520 is in state SUCCESS
2026-04-10 00:45:29.380503 | orchestrator | 2026-04-10 00:45:29 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:29.380514 | orchestrator | 2026-04-10 00:45:29 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:29.380522 | orchestrator | 2026-04-10 00:45:29 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:32.473832 | orchestrator | 2026-04-10 00:45:32 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:32.473925 | orchestrator | 2026-04-10 00:45:32 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:32.473932 | orchestrator | 2026-04-10 00:45:32 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:32.473937 | orchestrator | 2026-04-10 00:45:32 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:32.473942 | orchestrator | 2026-04-10 00:45:32 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:32.473947 | orchestrator | 2026-04-10 00:45:32 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:35.500303 | orchestrator | 2026-04-10 00:45:35 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:35.501536 | orchestrator | 2026-04-10 00:45:35 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:35.502980 | orchestrator | 2026-04-10 00:45:35 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:35.549774 | orchestrator | 2026-04-10 00:45:35 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:35.549975 | orchestrator | 2026-04-10 00:45:35 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:35.549994 | orchestrator | 2026-04-10 00:45:35 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:38.630326 | orchestrator | 2026-04-10 00:45:38 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:38.630620 | orchestrator | 2026-04-10 00:45:38 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:38.631858 | orchestrator | 2026-04-10 00:45:38 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:38.632135 | orchestrator | 2026-04-10 00:45:38 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:38.633334 | orchestrator | 2026-04-10 00:45:38 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:38.633462 | orchestrator | 2026-04-10 00:45:38 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:41.705324 | orchestrator | 2026-04-10 00:45:41 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:41.707474 | orchestrator | 2026-04-10 00:45:41 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:41.708232 | orchestrator | 2026-04-10 00:45:41 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:41.709936 | orchestrator | 2026-04-10 00:45:41 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:41.712886 | orchestrator | 2026-04-10 00:45:41 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:41.712930 | orchestrator | 2026-04-10 00:45:41 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:44.772322 | orchestrator | 2026-04-10 00:45:44 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:44.772563 | orchestrator | 2026-04-10 00:45:44 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:44.773770 | orchestrator | 2026-04-10 00:45:44 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:44.774976 | orchestrator | 2026-04-10 00:45:44 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:44.776849 | orchestrator | 2026-04-10 00:45:44 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:44.776893 | orchestrator | 2026-04-10 00:45:44 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:47.814279 | orchestrator | 2026-04-10 00:45:47 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:47.815327 | orchestrator | 2026-04-10 00:45:47 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:47.816338 | orchestrator | 2026-04-10 00:45:47 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:47.817591 | orchestrator | 2026-04-10 00:45:47 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:47.818868 | orchestrator | 2026-04-10 00:45:47 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:47.819013 | orchestrator | 2026-04-10 00:45:47 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:50.859714 | orchestrator | 2026-04-10 00:45:50 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state STARTED
2026-04-10 00:45:50.860424 | orchestrator | 2026-04-10 00:45:50 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED
2026-04-10 00:45:50.861094 | orchestrator | 2026-04-10 00:45:50 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:45:50.862812 | orchestrator | 2026-04-10 00:45:50 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:45:50.865951 | orchestrator | 2026-04-10 00:45:50 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:50.866071 | orchestrator | 2026-04-10 00:45:50 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:45:53.892242 | orchestrator |
2026-04-10 00:45:53.892306 | orchestrator |
2026-04-10 00:45:53.892312 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-10 00:45:53.892317 | orchestrator |
2026-04-10 00:45:53.892322 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-10 00:45:53.892327 | orchestrator | Friday 10 April 2026 00:44:40 +0000 (0:00:01.050) 0:00:01.050 **********
2026-04-10 00:45:53.892331 | orchestrator | ok: [testbed-manager] => {
2026-04-10 00:45:53.892337 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-10 00:45:53.892342 | orchestrator | } 2026-04-10 00:45:53.892347 | orchestrator | 2026-04-10 00:45:53.892351 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-04-10 00:45:53.892355 | orchestrator | Friday 10 April 2026 00:44:41 +0000 (0:00:00.856) 0:00:01.906 ********** 2026-04-10 00:45:53.892359 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.892364 | orchestrator | 2026-04-10 00:45:53.892368 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-04-10 00:45:53.892386 | orchestrator | Friday 10 April 2026 00:44:43 +0000 (0:00:02.352) 0:00:04.259 ********** 2026-04-10 00:45:53.892429 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-04-10 00:45:53.892434 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-04-10 00:45:53.892438 | orchestrator | 2026-04-10 00:45:53.892442 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-04-10 00:45:53.892446 | orchestrator | Friday 10 April 2026 00:44:45 +0000 (0:00:01.566) 0:00:05.826 ********** 2026-04-10 00:45:53.892450 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.892454 | orchestrator | 2026-04-10 00:45:53.892458 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-04-10 00:45:53.892461 | orchestrator | Friday 10 April 2026 00:44:48 +0000 (0:00:02.919) 0:00:08.746 ********** 2026-04-10 00:45:53.892465 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.892469 | orchestrator | 2026-04-10 00:45:53.892473 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-04-10 00:45:53.892476 | orchestrator | Friday 10 April 2026 00:44:50 +0000 (0:00:02.206) 0:00:10.952 ********** 2026-04-10 00:45:53.892480 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-04-10 00:45:53.892484 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.892488 | orchestrator | 2026-04-10 00:45:53.892492 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-04-10 00:45:53.892496 | orchestrator | Friday 10 April 2026 00:45:16 +0000 (0:00:26.093) 0:00:37.045 ********** 2026-04-10 00:45:53.892500 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.892503 | orchestrator | 2026-04-10 00:45:53.892507 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:45:53.892511 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.892517 | orchestrator | 2026-04-10 00:45:53.892521 | orchestrator | 2026-04-10 00:45:53.892529 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:45:53.892533 | orchestrator | Friday 10 April 2026 00:45:21 +0000 (0:00:04.705) 0:00:41.751 ********** 2026-04-10 00:45:53.892537 | orchestrator | =============================================================================== 2026-04-10 00:45:53.892563 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.09s 2026-04-10 00:45:53.892568 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.71s 2026-04-10 00:45:53.892571 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.92s 2026-04-10 00:45:53.892575 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.35s 2026-04-10 00:45:53.892579 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.21s 2026-04-10 00:45:53.892583 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.57s 2026-04-10 00:45:53.892586 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.86s 2026-04-10 00:45:53.892590 | orchestrator | 2026-04-10 00:45:53.892594 | orchestrator | 2026-04-10 00:45:53.892598 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-10 00:45:53.892601 | orchestrator | 2026-04-10 00:45:53.892605 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-10 00:45:53.892609 | orchestrator | Friday 10 April 2026 00:44:40 +0000 (0:00:00.847) 0:00:00.847 ********** 2026-04-10 00:45:53.892613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-10 00:45:53.892618 | orchestrator | 2026-04-10 00:45:53.892622 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-10 00:45:53.892625 | orchestrator | Friday 10 April 2026 00:44:41 +0000 (0:00:00.916) 0:00:01.764 ********** 2026-04-10 00:45:53.892629 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-10 00:45:53.892638 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-10 00:45:53.892642 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-10 00:45:53.892646 | orchestrator | 2026-04-10 00:45:53.892650 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-10 00:45:53.892654 | orchestrator | Friday 10 April 2026 00:44:43 +0000 (0:00:02.550) 0:00:04.314 ********** 2026-04-10 00:45:53.892657 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.892661 | orchestrator | 2026-04-10 00:45:53.892665 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-10 00:45:53.892669 | orchestrator | Friday 10 April 2026 00:44:45 +0000 (0:00:01.720) 
0:00:06.034 ********** 2026-04-10 00:45:53.892683 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-10 00:45:53.892687 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.892691 | orchestrator | 2026-04-10 00:45:53.892695 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-10 00:45:53.892698 | orchestrator | Friday 10 April 2026 00:45:19 +0000 (0:00:34.323) 0:00:40.358 ********** 2026-04-10 00:45:53.892702 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.892706 | orchestrator | 2026-04-10 00:45:53.892710 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-10 00:45:53.892713 | orchestrator | Friday 10 April 2026 00:45:21 +0000 (0:00:02.062) 0:00:42.420 ********** 2026-04-10 00:45:53.892717 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.892721 | orchestrator | 2026-04-10 00:45:53.892725 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-10 00:45:53.892729 | orchestrator | Friday 10 April 2026 00:45:23 +0000 (0:00:01.448) 0:00:43.869 ********** 2026-04-10 00:45:53.892732 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.892736 | orchestrator | 2026-04-10 00:45:53.892740 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-10 00:45:53.892743 | orchestrator | Friday 10 April 2026 00:45:25 +0000 (0:00:01.866) 0:00:45.736 ********** 2026-04-10 00:45:53.892747 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.892751 | orchestrator | 2026-04-10 00:45:53.892755 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-10 00:45:53.892758 | orchestrator | Friday 10 April 2026 00:45:25 +0000 (0:00:00.776) 0:00:46.512 ********** 2026-04-10 00:45:53.892762 | orchestrator | changed: 
[testbed-manager] 2026-04-10 00:45:53.892766 | orchestrator | 2026-04-10 00:45:53.892770 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-10 00:45:53.892773 | orchestrator | Friday 10 April 2026 00:45:26 +0000 (0:00:00.637) 0:00:47.149 ********** 2026-04-10 00:45:53.892777 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.892781 | orchestrator | 2026-04-10 00:45:53.892785 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:45:53.892788 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.892792 | orchestrator | 2026-04-10 00:45:53.892796 | orchestrator | 2026-04-10 00:45:53.892800 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:45:53.892803 | orchestrator | Friday 10 April 2026 00:45:27 +0000 (0:00:00.726) 0:00:47.876 ********** 2026-04-10 00:45:53.892807 | orchestrator | =============================================================================== 2026-04-10 00:45:53.892811 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.32s 2026-04-10 00:45:53.892816 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.55s 2026-04-10 00:45:53.892820 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.06s 2026-04-10 00:45:53.892824 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.87s 2026-04-10 00:45:53.892829 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.72s 2026-04-10 00:45:53.892837 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.45s 2026-04-10 00:45:53.892841 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.92s 
2026-04-10 00:45:53.892846 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.78s 2026-04-10 00:45:53.892890 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.73s 2026-04-10 00:45:53.892895 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.64s 2026-04-10 00:45:53.892900 | orchestrator | 2026-04-10 00:45:53.892904 | orchestrator | 2026-04-10 00:45:53.892909 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:45:53.892913 | orchestrator | 2026-04-10 00:45:53.892917 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:45:53.892921 | orchestrator | Friday 10 April 2026 00:44:38 +0000 (0:00:00.424) 0:00:00.424 ********** 2026-04-10 00:45:53.892926 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-10 00:45:53.892930 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-10 00:45:53.892953 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-10 00:45:53.892958 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-10 00:45:53.892962 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-10 00:45:53.892967 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-10 00:45:53.892971 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-10 00:45:53.892975 | orchestrator | 2026-04-10 00:45:53.892979 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-10 00:45:53.892983 | orchestrator | 2026-04-10 00:45:53.892987 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-04-10 00:45:53.892991 | orchestrator | Friday 10 April 2026 00:44:40 +0000 
(0:00:01.396) 0:00:01.821 ********** 2026-04-10 00:45:53.893003 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2026-04-10 00:45:53.893012 | orchestrator | 2026-04-10 00:45:53.893016 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-10 00:45:53.893020 | orchestrator | Friday 10 April 2026 00:44:42 +0000 (0:00:02.297) 0:00:04.118 ********** 2026-04-10 00:45:53.893025 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.893029 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:45:53.893033 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:45:53.893037 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:45:53.893041 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:45:53.893049 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:45:53.893053 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:45:53.893057 | orchestrator | 2026-04-10 00:45:53.893062 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-10 00:45:53.893066 | orchestrator | Friday 10 April 2026 00:44:45 +0000 (0:00:02.930) 0:00:07.048 ********** 2026-04-10 00:45:53.893070 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:45:53.893074 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:45:53.893078 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:45:53.893082 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:45:53.893086 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:45:53.893090 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:45:53.893094 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.893099 | orchestrator | 2026-04-10 00:45:53.893103 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-04-10 00:45:53.893107 
| orchestrator | Friday 10 April 2026 00:44:48 +0000 (0:00:03.122) 0:00:10.171 ********** 2026-04-10 00:45:53.893111 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:45:53.893116 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:45:53.893123 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:45:53.893127 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:45:53.893132 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:45:53.893136 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:45:53.893140 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.893144 | orchestrator | 2026-04-10 00:45:53.893148 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-10 00:45:53.893153 | orchestrator | Friday 10 April 2026 00:44:50 +0000 (0:00:01.967) 0:00:12.138 ********** 2026-04-10 00:45:53.893157 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:45:53.893161 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:45:53.893165 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.893170 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:45:53.893174 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:45:53.893178 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:45:53.893182 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:45:53.893185 | orchestrator | 2026-04-10 00:45:53.893189 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-10 00:45:53.893193 | orchestrator | Friday 10 April 2026 00:45:01 +0000 (0:00:11.093) 0:00:23.231 ********** 2026-04-10 00:45:53.893197 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:45:53.893201 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:45:53.893204 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:45:53.893208 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:45:53.893212 | orchestrator | changed: [testbed-node-2] 
2026-04-10 00:45:53.893215 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:45:53.893219 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.893223 | orchestrator | 2026-04-10 00:45:53.893227 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-10 00:45:53.893230 | orchestrator | Friday 10 April 2026 00:45:24 +0000 (0:00:22.494) 0:00:45.726 ********** 2026-04-10 00:45:53.893237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:45:53.893242 | orchestrator | 2026-04-10 00:45:53.893246 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-10 00:45:53.893250 | orchestrator | Friday 10 April 2026 00:45:26 +0000 (0:00:01.982) 0:00:47.708 ********** 2026-04-10 00:45:53.893254 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-10 00:45:53.893258 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-10 00:45:53.893262 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-10 00:45:53.893265 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-10 00:45:53.893269 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-10 00:45:53.893273 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-10 00:45:53.893276 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-10 00:45:53.893280 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-10 00:45:53.893284 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-10 00:45:53.893288 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-10 00:45:53.893291 | orchestrator | changed: [testbed-node-3] => 
(item=stream.conf) 2026-04-10 00:45:53.893295 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-04-10 00:45:53.893299 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-10 00:45:53.893302 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-10 00:45:53.893306 | orchestrator | 2026-04-10 00:45:53.893310 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-10 00:45:53.893314 | orchestrator | Friday 10 April 2026 00:45:32 +0000 (0:00:06.053) 0:00:53.762 ********** 2026-04-10 00:45:53.893318 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.893324 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:45:53.893328 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:45:53.893332 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:45:53.893336 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:45:53.893339 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:45:53.893343 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:45:53.893347 | orchestrator | 2026-04-10 00:45:53.893350 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-10 00:45:53.893354 | orchestrator | Friday 10 April 2026 00:45:33 +0000 (0:00:01.074) 0:00:54.837 ********** 2026-04-10 00:45:53.893358 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.893362 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:45:53.893365 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:45:53.893369 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:45:53.893373 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:45:53.893377 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:45:53.893380 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:45:53.893384 | orchestrator | 2026-04-10 00:45:53.893388 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2026-04-10 00:45:53.893443 | orchestrator | Friday 10 April 2026 00:45:34 +0000 (0:00:01.042) 0:00:55.879 ********** 2026-04-10 00:45:53.893448 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.893451 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:45:53.893455 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:45:53.893459 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:45:53.893463 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:45:53.893466 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:45:53.893470 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:45:53.893474 | orchestrator | 2026-04-10 00:45:53.893478 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-10 00:45:53.893481 | orchestrator | Friday 10 April 2026 00:45:36 +0000 (0:00:02.142) 0:00:58.022 ********** 2026-04-10 00:45:53.893485 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:45:53.893489 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:45:53.893492 | orchestrator | ok: [testbed-manager] 2026-04-10 00:45:53.893496 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:45:53.893500 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:45:53.893503 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:45:53.893507 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:45:53.893511 | orchestrator | 2026-04-10 00:45:53.893515 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-10 00:45:53.893518 | orchestrator | Friday 10 April 2026 00:45:38 +0000 (0:00:02.101) 0:01:00.124 ********** 2026-04-10 00:45:53.893522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-10 00:45:53.893527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:45:53.893531 | orchestrator | 2026-04-10 00:45:53.893535 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-10 00:45:53.893539 | orchestrator | Friday 10 April 2026 00:45:40 +0000 (0:00:01.533) 0:01:01.657 ********** 2026-04-10 00:45:53.893543 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.893546 | orchestrator | 2026-04-10 00:45:53.893550 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-10 00:45:53.893554 | orchestrator | Friday 10 April 2026 00:45:42 +0000 (0:00:01.939) 0:01:03.596 ********** 2026-04-10 00:45:53.893558 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:45:53.893561 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:45:53.893565 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:45:53.893569 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:45:53.893573 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:45:53.893576 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:45:53.893580 | orchestrator | changed: [testbed-manager] 2026-04-10 00:45:53.893587 | orchestrator | 2026-04-10 00:45:53.893590 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:45:53.893594 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.893603 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.893607 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.893611 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.893615 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-04-10 00:45:53.893619 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.893622 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:45:53.893626 | orchestrator | 2026-04-10 00:45:53.893630 | orchestrator | 2026-04-10 00:45:53.893634 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:45:53.893638 | orchestrator | Friday 10 April 2026 00:45:53 +0000 (0:00:11.301) 0:01:14.898 ********** 2026-04-10 00:45:53.893641 | orchestrator | =============================================================================== 2026-04-10 00:45:53.893645 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 22.49s 2026-04-10 00:45:53.893649 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.30s 2026-04-10 00:45:53.893653 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.09s 2026-04-10 00:45:53.893656 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.05s 2026-04-10 00:45:53.893660 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.12s 2026-04-10 00:45:53.893664 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.93s 2026-04-10 00:45:53.893668 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.30s 2026-04-10 00:45:53.893671 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.14s 2026-04-10 00:45:53.893675 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.10s 2026-04-10 00:45:53.893679 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.98s 2026-04-10 
00:45:53.893683 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.97s 2026-04-10 00:45:53.893689 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.94s 2026-04-10 00:45:53.893693 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.53s 2026-04-10 00:45:53.893696 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.40s 2026-04-10 00:45:53.893700 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.07s 2026-04-10 00:45:53.893704 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.04s 2026-04-10 00:45:53.893708 | orchestrator | 2026-04-10 00:45:53 | INFO  | Task 94fbfb49-fbcb-4b8c-b617-d05aff94612f is in state SUCCESS 2026-04-10 00:45:53.893712 | orchestrator | 2026-04-10 00:45:53 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED 2026-04-10 00:45:53.893716 | orchestrator | 2026-04-10 00:45:53 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:45:53.895280 | orchestrator | 2026-04-10 00:45:53 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED 2026-04-10 00:45:53.896879 | orchestrator | 2026-04-10 00:45:53 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:45:53.896997 | orchestrator | 2026-04-10 00:45:53 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:45:56.931990 | orchestrator | 2026-04-10 00:45:56 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state STARTED 2026-04-10 00:45:56.937791 | orchestrator | 2026-04-10 00:45:56 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:45:56.939887 | orchestrator | 2026-04-10 00:45:56 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED 2026-04-10 00:45:56.940812 | orchestrator | 
2026-04-10 00:45:56 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:45:56.940911 | orchestrator | 2026-04-10 00:45:56 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:46:03.028644 | orchestrator | 2026-04-10 00:46:03 | INFO  | Task 8e764b92-3fb1-4b35-96fe-28163d467752 is in state SUCCESS
2026-04-10 00:46:03.029942 | orchestrator | 2026-04-10 00:46:03 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:46:03.031992 | orchestrator | 2026-04-10 00:46:03 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state STARTED
2026-04-10 00:46:03.035449 | orchestrator | 2026-04-10 00:46:03 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:46:03.036185 | orchestrator | 2026-04-10 00:46:03 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:47:07.029879 | orchestrator | 
2026-04-10 00:47:07 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:47:07.035016 | orchestrator | 2026-04-10 00:47:07 | INFO  | Task 2d35b11c-0a8b-49a1-8455-5ee386870ea4 is in state SUCCESS
2026-04-10 00:47:07.038466 | orchestrator |
2026-04-10 00:47:07.038575 | orchestrator |
2026-04-10 00:47:07.038583 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-04-10 00:47:07.038590 | orchestrator |
2026-04-10 00:47:07.038620 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-04-10 00:47:07.038627 | orchestrator | Friday 10 April 2026 00:44:56 +0000 (0:00:00.351) 0:00:00.351 **********
2026-04-10 00:47:07.038632 | orchestrator | ok: [testbed-manager]
2026-04-10 00:47:07.038638 | orchestrator |
2026-04-10 00:47:07.038644 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-04-10 00:47:07.038649 | orchestrator | Friday 10 April 2026 00:44:58 +0000 (0:00:01.902) 0:00:02.253 **********
2026-04-10 00:47:07.038655 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-04-10 00:47:07.038660 | orchestrator |
2026-04-10 00:47:07.038666 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-04-10 00:47:07.038671 | orchestrator | Friday 10 April 2026 00:44:59 +0000 (0:00:01.034) 0:00:03.288 **********
2026-04-10 00:47:07.038676 | orchestrator | changed: [testbed-manager]
2026-04-10 00:47:07.038682 | orchestrator |
2026-04-10 00:47:07.038687 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-04-10 00:47:07.038693 | orchestrator | Friday 10 April 2026 00:45:00 +0000 (0:00:01.275) 0:00:04.564 **********
2026-04-10 00:47:07.038698 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
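The `FAILED - RETRYING: ... (10 retries left)` line above is Ansible's `retries`/`delay`/`until` loop: the task is re-run until its condition passes or the retry budget is exhausted, the same poll-and-wait pattern as the `Wait 1 second(s) until the next check` task-state loop earlier in the log. A minimal sketch of that pattern (hypothetical helper, not code from this job):

```python
import time


def wait_until(check, retries=10, delay=1.0):
    """Poll `check` until it returns True or `retries` attempts are used up.

    Hypothetical helper mirroring Ansible's retries/delay/until loop;
    not part of the OSISM tooling itself.
    """
    for attempt in range(1, retries + 1):
        if check():
            return True
        if attempt < retries:
            time.sleep(delay)  # wait before the next check
    return False
```

With a check that fails once and then succeeds, the helper returns `True` on the second attempt; if every attempt fails it returns `False` and the caller reports the task as failed, which is what kolla-ansible would have done here had all 10 retries been spent.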
2026-04-10 00:47:07.038705 | orchestrator | ok: [testbed-manager]
2026-04-10 00:47:07.038735 | orchestrator |
2026-04-10 00:47:07.038741 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-10 00:47:07.038746 | orchestrator | Friday 10 April 2026 00:45:57 +0000 (0:00:56.360) 0:01:00.924 **********
2026-04-10 00:47:07.038752 | orchestrator | changed: [testbed-manager]
2026-04-10 00:47:07.038757 | orchestrator |
2026-04-10 00:47:07.038762 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:47:07.038768 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:47:07.038787 | orchestrator |
2026-04-10 00:47:07.038792 | orchestrator |
2026-04-10 00:47:07.038797 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:47:07.038802 | orchestrator | Friday 10 April 2026 00:46:00 +0000 (0:00:03.023) 0:01:03.947 **********
2026-04-10 00:47:07.038807 | orchestrator | ===============================================================================
2026-04-10 00:47:07.038812 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 56.36s
2026-04-10 00:47:07.038817 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.02s
2026-04-10 00:47:07.038823 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.90s
2026-04-10 00:47:07.038828 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.28s
2026-04-10 00:47:07.038833 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.03s
2026-04-10 00:47:07.038839 | orchestrator |
2026-04-10 00:47:07.038844 | orchestrator |
2026-04-10 00:47:07.038849 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-10 00:47:07.038854 | orchestrator |
2026-04-10 00:47:07.038859 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-10 00:47:07.038865 | orchestrator | Friday 10 April 2026 00:44:33 +0000 (0:00:00.312) 0:00:00.312 **********
2026-04-10 00:47:07.038871 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:47:07.038876 | orchestrator |
2026-04-10 00:47:07.038882 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-10 00:47:07.038887 | orchestrator | Friday 10 April 2026 00:44:35 +0000 (0:00:01.432) 0:00:01.744 **********
2026-04-10 00:47:07.038892 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-10 00:47:07.038897 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-10 00:47:07.038902 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-10 00:47:07.038907 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-10 00:47:07.038926 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-10 00:47:07.038932 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-10 00:47:07.038937 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-10 00:47:07.038942 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-10 00:47:07.038947 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-10 00:47:07.038952 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-10 00:47:07.038957 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-10 00:47:07.038963 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-10 00:47:07.038968 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-10 00:47:07.038974 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-10 00:47:07.038979 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-10 00:47:07.038985 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-10 00:47:07.039001 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-10 00:47:07.039007 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-10 00:47:07.039012 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-10 00:47:07.039022 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-10 00:47:07.039027 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-10 00:47:07.039032 | orchestrator |
2026-04-10 00:47:07.039037 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-10 00:47:07.039043 | orchestrator | Friday 10 April 2026 00:44:39 +0000 (0:00:04.011) 0:00:05.756 **********
2026-04-10 00:47:07.039055 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:47:07.039061 | orchestrator |
2026-04-10 00:47:07.039067 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-10 00:47:07.039072 | orchestrator | Friday 10 April 2026 00:44:40 +0000 (0:00:01.250) 0:00:07.007 **********
2026-04-10 00:47:07.039080 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039150 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039223 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039245 | orchestrator |
2026-04-10 00:47:07.039251 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-10 00:47:07.039257 | orchestrator | Friday 10 April 2026 00:44:46 +0000 (0:00:05.515) 0:00:12.522 **********
2026-04-10 00:47:07.039268 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039274 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039280 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039286 | orchestrator | skipping: [testbed-manager]
2026-04-10 00:47:07.039296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039342 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:47:07.039347 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:47:07.039353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.039369 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:47:07.039378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-10 00:47:07.039384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039399 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:47:07.039404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 
00:47:07.039526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039533 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:47:07.039538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039564 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:47:07.039570 | orchestrator | 2026-04-10 00:47:07.039575 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-10 00:47:07.039581 | orchestrator | Friday 10 April 2026 00:44:48 +0000 (0:00:02.494) 0:00:15.016 ********** 2026-04-10 00:47:07.039586 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039592 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039602 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039609 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:47:07.039615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-10 00:47:07.039632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039652 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:47:07.039658 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:47:07.039663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039696 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039719 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:47:07.039724 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:47:07.039730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039751 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:47:07.039756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-10 00:47:07.039762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.039776 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:47:07.039782 | orchestrator | 2026-04-10 00:47:07.039787 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-10 00:47:07.039793 | orchestrator | Friday 10 April 2026 00:44:51 +0000 (0:00:02.942) 0:00:17.959 ********** 2026-04-10 00:47:07.039798 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:47:07.039803 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:47:07.039809 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:47:07.039815 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:47:07.039821 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:47:07.039826 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:47:07.039831 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:47:07.039836 | orchestrator | 2026-04-10 00:47:07.039841 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-10 00:47:07.039850 | orchestrator | Friday 10 April 2026 00:44:53 +0000 (0:00:01.932) 0:00:19.892 ********** 2026-04-10 00:47:07.039856 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:47:07.039861 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:47:07.039866 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:47:07.039871 | orchestrator | skipping: 
[testbed-node-2] 2026-04-10 00:47:07.039876 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:47:07.039881 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:47:07.039886 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:47:07.039891 | orchestrator | 2026-04-10 00:47:07.039896 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-10 00:47:07.039901 | orchestrator | Friday 10 April 2026 00:44:54 +0000 (0:00:01.317) 0:00:21.210 ********** 2026-04-10 00:47:07.039907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.039913 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.039925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.039931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.039937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.039949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.039957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.039962 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.039968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.039973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.039982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.039988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.039998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040004 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040039 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040055 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040061 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.040067 | orchestrator |
2026-04-10 00:47:07.040072 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-10 00:47:07.040077 | orchestrator | Friday 10 April 2026 00:45:01 +0000 (0:00:06.528) 0:00:27.739 **********
2026-04-10 00:47:07.040082 | orchestrator | [WARNING]: Skipped
2026-04-10 00:47:07.040088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-10 00:47:07.040094 | orchestrator | to this access issue:
2026-04-10 00:47:07.040099 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-10 00:47:07.040104 | orchestrator | directory
2026-04-10 00:47:07.040110 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 00:47:07.040115 | orchestrator |
2026-04-10 00:47:07.040120 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-10 00:47:07.040125 | orchestrator | Friday 10 April 2026 00:45:02 +0000 (0:00:01.161) 0:00:28.900 **********
2026-04-10 00:47:07.040130 | orchestrator | [WARNING]: Skipped
2026-04-10 00:47:07.040136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-10 00:47:07.040141 | orchestrator | to this access issue:
2026-04-10 00:47:07.040146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-10 00:47:07.040152 | orchestrator | directory
2026-04-10 00:47:07.040157 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 00:47:07.040162 | orchestrator |
2026-04-10 00:47:07.040171 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-10 00:47:07.040176 | orchestrator | Friday 10 April 2026 00:45:03 +0000 (0:00:00.695) 0:00:29.595 **********
2026-04-10 00:47:07.040181 | orchestrator | [WARNING]: Skipped
2026-04-10 00:47:07.040186 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-10 00:47:07.040192 | orchestrator | to this access issue:
2026-04-10 00:47:07.040197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-10 00:47:07.040202 | orchestrator | directory
2026-04-10 00:47:07.040208 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 00:47:07.040214 | orchestrator |
2026-04-10 00:47:07.040219 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-10 00:47:07.040224 | orchestrator | Friday 10 April 2026 00:45:03 +0000 (0:00:00.678) 0:00:30.274 **********
2026-04-10 00:47:07.040229 | orchestrator | [WARNING]: Skipped
2026-04-10 00:47:07.040234 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-10 00:47:07.040240 | orchestrator | to this access issue:
2026-04-10 00:47:07.040246 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-10 00:47:07.040251 | orchestrator | directory
2026-04-10 00:47:07.040256 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 00:47:07.040261 | orchestrator |
2026-04-10 00:47:07.040267 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-10 00:47:07.040272 | orchestrator | Friday 10 April 2026 00:45:05 +0000 (0:00:01.458) 0:00:31.732 **********
2026-04-10 00:47:07.040282 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:47:07.040287 | orchestrator | changed: [testbed-manager]
2026-04-10 00:47:07.040293 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:47:07.040298 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:47:07.040304 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:47:07.040309 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:47:07.040314 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:47:07.040319 | orchestrator |
2026-04-10 00:47:07.040325 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-10 00:47:07.040330 | orchestrator | Friday 10 April 2026 00:45:10 +0000 (0:00:04.832) 0:00:36.565 **********
2026-04-10 00:47:07.040335 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-10 00:47:07.040341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-10 00:47:07.040347 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-10 00:47:07.040356 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-10 00:47:07.040361 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-10 00:47:07.040366 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-10 00:47:07.040371 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-10 00:47:07.040377 | orchestrator |
2026-04-10 00:47:07.040382 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-10 00:47:07.040387 | orchestrator | Friday 10 April 2026 00:45:13 +0000 (0:00:03.350) 0:00:39.916 **********
2026-04-10 00:47:07.040392 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:47:07.040397 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:47:07.040402 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:47:07.040408 | orchestrator | changed: [testbed-manager] 2026-04-10
00:47:07.040430 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:47:07.040435 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:47:07.040440 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:47:07.040445 | orchestrator | 2026-04-10 00:47:07.040453 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-10 00:47:07.040461 | orchestrator | Friday 10 April 2026 00:45:16 +0000 (0:00:03.132) 0:00:43.049 ********** 2026-04-10 00:47:07.040466 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.040481 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040493 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.040508 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.040517 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040522 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.040537 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040543 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040549 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.040562 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.040572 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040580 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040592 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040598 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:47:07.040603 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 00:47:07.040608 | orchestrator |
2026-04-10 00:47:07.040613 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-10 00:47:07.040619 | orchestrator | Friday 10 April 2026 00:45:20 +0000 (0:00:03.755) 0:00:46.804 **********
2026-04-10 00:47:07.040624 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-10 00:47:07.040629 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-10 00:47:07.040635 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-10 00:47:07.040786 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-10 00:47:07.040800 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-10 00:47:07.040807 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-10 00:47:07.040812 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-10 00:47:07.040816 | orchestrator |
2026-04-10 00:47:07.040821 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-10 00:47:07.040826 | orchestrator | Friday 10 April 2026 00:45:23 +0000 (0:00:03.061) 0:00:49.866 **********
2026-04-10 00:47:07.040831 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-10 00:47:07.040836 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-10 00:47:07.040841 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-10 00:47:07.040846 | orchestrator | changed:
[testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-10 00:47:07.040851 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-10 00:47:07.040856 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-10 00:47:07.040860 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-10 00:47:07.040865 | orchestrator | 2026-04-10 00:47:07.040876 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-10 00:47:07.040881 | orchestrator | Friday 10 April 2026 00:45:26 +0000 (0:00:03.118) 0:00:52.984 ********** 2026-04-10 00:47:07.040888 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040914 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 
00:47:07.040944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-10 00:47:07.040950 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040975 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040988 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:47:07.040992 | orchestrator | 2026-04-10 00:47:07.040997 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-10 00:47:07.041002 | orchestrator | Friday 10 April 2026 00:45:31 +0000 (0:00:04.697) 0:00:57.682 ********** 2026-04-10 00:47:07.041008 | orchestrator | changed: [testbed-manager] 2026-04-10 00:47:07.041013 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:47:07.041018 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:47:07.041023 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:47:07.041028 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:47:07.041033 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:47:07.041041 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:47:07.041046 | orchestrator | 2026-04-10 00:47:07.041051 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-10 00:47:07.041057 | orchestrator | Friday 10 April 2026 00:45:32 +0000 (0:00:01.791) 0:00:59.473 ********** 2026-04-10 00:47:07.041062 | orchestrator | changed: [testbed-manager] 2026-04-10 00:47:07.041067 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:47:07.041072 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:47:07.041077 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:47:07.041082 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:47:07.041087 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:47:07.041092 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:47:07.041097 | orchestrator | 2026-04-10 00:47:07.041103 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2026-04-10 00:47:07.041108 | orchestrator | Friday 10 April 2026 00:45:34 +0000 (0:00:01.900) 0:01:01.373 ********** 2026-04-10 00:47:07.041113 | orchestrator | 2026-04-10 00:47:07.041118 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-10 00:47:07.041123 | orchestrator | Friday 10 April 2026 00:45:34 +0000 (0:00:00.098) 0:01:01.472 ********** 2026-04-10 00:47:07.041128 | orchestrator | 2026-04-10 00:47:07.041133 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-10 00:47:07.041138 | orchestrator | Friday 10 April 2026 00:45:35 +0000 (0:00:00.102) 0:01:01.574 ********** 2026-04-10 00:47:07.041143 | orchestrator | 2026-04-10 00:47:07.041148 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-10 00:47:07.041154 | orchestrator | Friday 10 April 2026 00:45:35 +0000 (0:00:00.076) 0:01:01.651 ********** 2026-04-10 00:47:07.041159 | orchestrator | 2026-04-10 00:47:07.041164 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-10 00:47:07.041169 | orchestrator | Friday 10 April 2026 00:45:35 +0000 (0:00:00.074) 0:01:01.725 ********** 2026-04-10 00:47:07.041174 | orchestrator | 2026-04-10 00:47:07.041180 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-10 00:47:07.041183 | orchestrator | Friday 10 April 2026 00:45:35 +0000 (0:00:00.065) 0:01:01.791 ********** 2026-04-10 00:47:07.041186 | orchestrator | 2026-04-10 00:47:07.041190 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-10 00:47:07.041195 | orchestrator | Friday 10 April 2026 00:45:35 +0000 (0:00:00.078) 0:01:01.870 ********** 2026-04-10 00:47:07.041200 | orchestrator | 2026-04-10 00:47:07.041205 | orchestrator | RUNNING HANDLER 
[common : Restart fluentd container] *************************** 2026-04-10 00:47:07.041210 | orchestrator | Friday 10 April 2026 00:45:35 +0000 (0:00:00.100) 0:01:01.970 ********** 2026-04-10 00:47:07.041213 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:47:07.041216 | orchestrator | changed: [testbed-manager] 2026-04-10 00:47:07.041219 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:47:07.041222 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:47:07.041228 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:47:07.041231 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:47:07.041234 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:47:07.041237 | orchestrator | 2026-04-10 00:47:07.041241 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-10 00:47:07.041244 | orchestrator | Friday 10 April 2026 00:46:06 +0000 (0:00:30.994) 0:01:32.964 ********** 2026-04-10 00:47:07.041247 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:47:07.041250 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:47:07.041253 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:47:07.041256 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:47:07.041259 | orchestrator | changed: [testbed-manager] 2026-04-10 00:47:07.041263 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:47:07.041268 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:47:07.041274 | orchestrator | 2026-04-10 00:47:07.041277 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-10 00:47:07.041284 | orchestrator | Friday 10 April 2026 00:46:53 +0000 (0:00:46.778) 0:02:19.742 ********** 2026-04-10 00:47:07.041287 | orchestrator | ok: [testbed-manager] 2026-04-10 00:47:07.041290 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:47:07.041294 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:47:07.041297 | orchestrator | ok: [testbed-node-2] 
2026-04-10 00:47:07.041303 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:47:07.041308 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:47:07.041313 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:47:07.041318 | orchestrator | 2026-04-10 00:47:07.041323 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-10 00:47:07.041328 | orchestrator | Friday 10 April 2026 00:46:55 +0000 (0:00:02.367) 0:02:22.110 ********** 2026-04-10 00:47:07.041333 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:47:07.041338 | orchestrator | changed: [testbed-manager] 2026-04-10 00:47:07.041343 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:47:07.041349 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:47:07.041354 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:47:07.041359 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:47:07.041364 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:47:07.041369 | orchestrator | 2026-04-10 00:47:07.041374 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:47:07.041380 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 00:47:07.041386 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 00:47:07.041395 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 00:47:07.041401 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 00:47:07.041407 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 00:47:07.041426 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 00:47:07.041431 | orchestrator | 
testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 00:47:07.041436 | orchestrator | 2026-04-10 00:47:07.041441 | orchestrator | 2026-04-10 00:47:07.041447 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:47:07.041451 | orchestrator | Friday 10 April 2026 00:47:05 +0000 (0:00:10.280) 0:02:32.391 ********** 2026-04-10 00:47:07.041457 | orchestrator | =============================================================================== 2026-04-10 00:47:07.041461 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 46.78s 2026-04-10 00:47:07.041466 | orchestrator | common : Restart fluentd container ------------------------------------- 30.99s 2026-04-10 00:47:07.041471 | orchestrator | common : Restart cron container ---------------------------------------- 10.28s 2026-04-10 00:47:07.041477 | orchestrator | common : Copying over config.json files for services -------------------- 6.53s 2026-04-10 00:47:07.041482 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.52s 2026-04-10 00:47:07.041487 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.83s 2026-04-10 00:47:07.041493 | orchestrator | common : Check common containers ---------------------------------------- 4.70s 2026-04-10 00:47:07.041498 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.01s 2026-04-10 00:47:07.041503 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.76s 2026-04-10 00:47:07.041513 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.35s 2026-04-10 00:47:07.041519 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.13s 2026-04-10 00:47:07.041524 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla 
toolbox ---------------------- 3.12s 2026-04-10 00:47:07.041530 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.06s 2026-04-10 00:47:07.041536 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.94s 2026-04-10 00:47:07.041542 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.49s 2026-04-10 00:47:07.041547 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.37s 2026-04-10 00:47:07.041555 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.93s 2026-04-10 00:47:07.041560 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.90s 2026-04-10 00:47:07.041565 | orchestrator | common : Creating log volume -------------------------------------------- 1.79s 2026-04-10 00:47:07.041570 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.46s 2026-04-10 00:47:07.041575 | orchestrator | 2026-04-10 00:47:07 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:07.041581 | orchestrator | 2026-04-10 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:10.073484 | orchestrator | 2026-04-10 00:47:10 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:10.074756 | orchestrator | 2026-04-10 00:47:10 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:10.074807 | orchestrator | 2026-04-10 00:47:10 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:10.075478 | orchestrator | 2026-04-10 00:47:10 | INFO  | Task 4f80de6e-4e07-406c-9d09-6c0334b7d8a3 is in state STARTED 2026-04-10 00:47:10.076156 | orchestrator | 2026-04-10 00:47:10 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:10.077067 | 
orchestrator | 2026-04-10 00:47:10 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:10.077175 | orchestrator | 2026-04-10 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:13.103693 | orchestrator | 2026-04-10 00:47:13 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:13.104130 | orchestrator | 2026-04-10 00:47:13 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:13.104792 | orchestrator | 2026-04-10 00:47:13 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:13.105354 | orchestrator | 2026-04-10 00:47:13 | INFO  | Task 4f80de6e-4e07-406c-9d09-6c0334b7d8a3 is in state STARTED 2026-04-10 00:47:13.106111 | orchestrator | 2026-04-10 00:47:13 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:13.106766 | orchestrator | 2026-04-10 00:47:13 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:13.106800 | orchestrator | 2026-04-10 00:47:13 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:16.137403 | orchestrator | 2026-04-10 00:47:16 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:16.137906 | orchestrator | 2026-04-10 00:47:16 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:16.138606 | orchestrator | 2026-04-10 00:47:16 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:16.140493 | orchestrator | 2026-04-10 00:47:16 | INFO  | Task 4f80de6e-4e07-406c-9d09-6c0334b7d8a3 is in state STARTED 2026-04-10 00:47:16.140952 | orchestrator | 2026-04-10 00:47:16 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:16.141626 | orchestrator | 2026-04-10 00:47:16 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:16.141716 | 
orchestrator | 2026-04-10 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:19.175176 | orchestrator | 2026-04-10 00:47:19 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:19.175914 | orchestrator | 2026-04-10 00:47:19 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:19.177481 | orchestrator | 2026-04-10 00:47:19 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:19.179036 | orchestrator | 2026-04-10 00:47:19 | INFO  | Task 4f80de6e-4e07-406c-9d09-6c0334b7d8a3 is in state STARTED 2026-04-10 00:47:19.180521 | orchestrator | 2026-04-10 00:47:19 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:19.181625 | orchestrator | 2026-04-10 00:47:19 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:19.181661 | orchestrator | 2026-04-10 00:47:19 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:22.206201 | orchestrator | 2026-04-10 00:47:22 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:22.206546 | orchestrator | 2026-04-10 00:47:22 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:22.207243 | orchestrator | 2026-04-10 00:47:22 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:22.207958 | orchestrator | 2026-04-10 00:47:22 | INFO  | Task 4f80de6e-4e07-406c-9d09-6c0334b7d8a3 is in state STARTED 2026-04-10 00:47:22.208600 | orchestrator | 2026-04-10 00:47:22 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:22.210128 | orchestrator | 2026-04-10 00:47:22 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:22.210159 | orchestrator | 2026-04-10 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:25.245921 | orchestrator | 2026-04-10 
00:47:25 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:25.246348 | orchestrator | 2026-04-10 00:47:25 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:25.249295 | orchestrator | 2026-04-10 00:47:25 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:25.249753 | orchestrator | 2026-04-10 00:47:25 | INFO  | Task 4f80de6e-4e07-406c-9d09-6c0334b7d8a3 is in state STARTED 2026-04-10 00:47:25.250401 | orchestrator | 2026-04-10 00:47:25 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:25.253369 | orchestrator | 2026-04-10 00:47:25 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:25.253411 | orchestrator | 2026-04-10 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:28.286332 | orchestrator | 2026-04-10 00:47:28 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:28.286565 | orchestrator | 2026-04-10 00:47:28 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:28.287541 | orchestrator | 2026-04-10 00:47:28 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:28.287586 | orchestrator | 2026-04-10 00:47:28 | INFO  | Task 4f80de6e-4e07-406c-9d09-6c0334b7d8a3 is in state SUCCESS 2026-04-10 00:47:28.289694 | orchestrator | 2026-04-10 00:47:28 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:28.289742 | orchestrator | 2026-04-10 00:47:28 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:28.290591 | orchestrator | 2026-04-10 00:47:28 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:47:28.290621 | orchestrator | 2026-04-10 00:47:28 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:31.358916 | orchestrator | 2026-04-10 
00:47:31 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:31.361167 | orchestrator | 2026-04-10 00:47:31 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:31.361847 | orchestrator | 2026-04-10 00:47:31 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:31.362663 | orchestrator | 2026-04-10 00:47:31 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:31.363400 | orchestrator | 2026-04-10 00:47:31 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:31.364098 | orchestrator | 2026-04-10 00:47:31 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:47:31.364219 | orchestrator | 2026-04-10 00:47:31 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:34.402229 | orchestrator | 2026-04-10 00:47:34 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:34.402375 | orchestrator | 2026-04-10 00:47:34 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state STARTED 2026-04-10 00:47:34.406337 | orchestrator | 2026-04-10 00:47:34 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:47:34.406690 | orchestrator | 2026-04-10 00:47:34 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:47:34.407507 | orchestrator | 2026-04-10 00:47:34 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED 2026-04-10 00:47:34.408295 | orchestrator | 2026-04-10 00:47:34 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:47:34.408330 | orchestrator | 2026-04-10 00:47:34 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:47:37.467889 | orchestrator | 2026-04-10 00:47:37 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED 2026-04-10 00:47:37.468828 | orchestrator | 2026-04-10 
00:47:37 | INFO  | Task 8143de70-72c2-4ca4-82b4-fcc29462807f is in state SUCCESS 2026-04-10 00:47:37.469358 | orchestrator | 2026-04-10 00:47:37.469409 | orchestrator | 2026-04-10 00:47:37.469457 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:47:37.469476 | orchestrator | 2026-04-10 00:47:37.469490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:47:37.469505 | orchestrator | Friday 10 April 2026 00:47:09 +0000 (0:00:00.298) 0:00:00.298 ********** 2026-04-10 00:47:37.469521 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:47:37.469538 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:47:37.469552 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:47:37.469567 | orchestrator | 2026-04-10 00:47:37.469583 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:47:37.469597 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:00.296) 0:00:00.595 ********** 2026-04-10 00:47:37.469612 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-10 00:47:37.469628 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-10 00:47:37.469641 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-10 00:47:37.469657 | orchestrator | 2026-04-10 00:47:37.469973 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-10 00:47:37.470003 | orchestrator | 2026-04-10 00:47:37.470079 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-10 00:47:37.470099 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:00.373) 0:00:00.969 ********** 2026-04-10 00:47:37.470114 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:47:37.470130 | orchestrator | 
2026-04-10 00:47:37.470145 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-10 00:47:37.470160 | orchestrator | Friday 10 April 2026 00:47:11 +0000 (0:00:00.514) 0:00:01.483 ********** 2026-04-10 00:47:37.470175 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-10 00:47:37.470190 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-10 00:47:37.470204 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-10 00:47:37.470218 | orchestrator | 2026-04-10 00:47:37.470230 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-10 00:47:37.470243 | orchestrator | Friday 10 April 2026 00:47:12 +0000 (0:00:01.286) 0:00:02.769 ********** 2026-04-10 00:47:37.470255 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-10 00:47:37.470270 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-10 00:47:37.470285 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-10 00:47:37.470300 | orchestrator | 2026-04-10 00:47:37.470314 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-04-10 00:47:37.470329 | orchestrator | Friday 10 April 2026 00:47:13 +0000 (0:00:01.581) 0:00:04.351 ********** 2026-04-10 00:47:37.470343 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:47:37.470358 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:47:37.470374 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:47:37.470389 | orchestrator | 2026-04-10 00:47:37.470404 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-10 00:47:37.470419 | orchestrator | Friday 10 April 2026 00:47:15 +0000 (0:00:01.756) 0:00:06.107 ********** 2026-04-10 00:47:37.470493 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:47:37.470508 | orchestrator | changed: [testbed-node-1] 
2026-04-10 00:47:37.470522 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:47:37.470536 | orchestrator | 2026-04-10 00:47:37.470550 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:47:37.470565 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:47:37.470583 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:47:37.470599 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:47:37.470614 | orchestrator | 2026-04-10 00:47:37.470629 | orchestrator | 2026-04-10 00:47:37.470643 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:47:37.470659 | orchestrator | Friday 10 April 2026 00:47:25 +0000 (0:00:09.576) 0:00:15.684 ********** 2026-04-10 00:47:37.470676 | orchestrator | =============================================================================== 2026-04-10 00:47:37.470691 | orchestrator | memcached : Restart memcached container --------------------------------- 9.58s 2026-04-10 00:47:37.470706 | orchestrator | memcached : Check memcached container ----------------------------------- 1.76s 2026-04-10 00:47:37.470722 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.58s 2026-04-10 00:47:37.470737 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.29s 2026-04-10 00:47:37.470751 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s 2026-04-10 00:47:37.470768 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2026-04-10 00:47:37.470806 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-04-10 00:47:37.470822 | orchestrator | 
2026-04-10 00:47:37.470853 | orchestrator | 2026-04-10 00:47:37.470868 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:47:37.470884 | orchestrator | 2026-04-10 00:47:37.470899 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:47:37.471080 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:00.462) 0:00:00.462 ********** 2026-04-10 00:47:37.471100 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:47:37.471113 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:47:37.471126 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:47:37.471140 | orchestrator | 2026-04-10 00:47:37.471153 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:47:37.471181 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:00.379) 0:00:00.842 ********** 2026-04-10 00:47:37.471195 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-10 00:47:37.471209 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-10 00:47:37.471223 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-10 00:47:37.471237 | orchestrator | 2026-04-10 00:47:37.471250 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-10 00:47:37.471263 | orchestrator | 2026-04-10 00:47:37.471276 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-10 00:47:37.471288 | orchestrator | Friday 10 April 2026 00:47:11 +0000 (0:00:00.366) 0:00:01.208 ********** 2026-04-10 00:47:37.471301 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:47:37.471315 | orchestrator | 2026-04-10 00:47:37.471329 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-10 
00:47:37.471338 | orchestrator | Friday 10 April 2026 00:47:11 +0000 (0:00:00.575) 0:00:01.783 ********** 2026-04-10 00:47:37.471349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471601 | orchestrator | 2026-04-10 00:47:37.471610 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-10 00:47:37.471618 | orchestrator | Friday 10 April 2026 00:47:13 +0000 (0:00:01.857) 0:00:03.641 ********** 2026-04-10 00:47:37.471627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471701 | orchestrator | 2026-04-10 00:47:37.471711 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-10 00:47:37.471721 | orchestrator | Friday 10 April 2026 00:47:15 +0000 (0:00:02.439) 0:00:06.081 ********** 2026-04-10 00:47:37.471730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471820 | orchestrator | 2026-04-10 00:47:37.471842 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-10 00:47:37.471851 | orchestrator | Friday 10 April 2026 00:47:18 +0000 (0:00:02.413) 0:00:08.495 ********** 2026-04-10 00:47:37.471861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
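The `item=` dumps above are Kolla-style service definitions: a container name, an image, bind mounts, and a healthcheck dict. As a minimal sketch of how such a definition maps onto container runtime options (the flag mapping below is an illustrative assumption, not Kolla's actual code; the dict mirrors the log output):

```python
# A trimmed service definition copied from the log output above.
service = {
    "container_name": "redis",
    "image": "registry.osism.tech/kolla/redis:2024.2",
    "volumes": [
        "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
        "redis:/var/lib/redis/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
        "timeout": "30",
    },
}

def docker_run_args(svc):
    """Build an illustrative `docker run` argument list from a service dict."""
    args = ["docker", "run", "-d", "--name", svc["container_name"]]
    for volume in svc["volumes"]:
        args += ["--volume", volume]
    hc = svc.get("healthcheck")
    if hc:
        # 'CMD-SHELL' means the remaining list items form one shell command.
        args += ["--health-cmd", " ".join(hc["test"][1:])]
        args += ["--health-interval", hc["interval"] + "s"]
        args += ["--health-retries", hc["retries"]]
        args += ["--health-start-period", hc["start_period"] + "s"]
        args += ["--health-timeout", hc["timeout"] + "s"]
    args.append(svc["image"])
    return args

print(docker_run_args(service))
```

The same shape recurs for `redis-sentinel` and, later, the openvswitch containers; only the name, image, volumes, and healthcheck command differ.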
2026-04-10 00:47:37.471880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-10 00:47:37.471928 | orchestrator | 2026-04-10 00:47:37.471936 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-10 00:47:37.471944 | orchestrator | Friday 10 April 2026 00:47:19 +0000 (0:00:01.431) 0:00:09.927 ********** 2026-04-10 00:47:37.471952 | orchestrator | 2026-04-10 00:47:37.471960 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-10 00:47:37.471972 | orchestrator | Friday 10 April 2026 00:47:20 +0000 (0:00:00.236) 0:00:10.164 ********** 2026-04-10 00:47:37.471980 | orchestrator | 2026-04-10 00:47:37.471988 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-10 00:47:37.471996 | orchestrator | Friday 10 April 2026 00:47:20 +0000 (0:00:00.060) 0:00:10.224 ********** 2026-04-10 00:47:37.472003 | orchestrator | 2026-04-10 00:47:37.472011 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-10 00:47:37.472019 | orchestrator | Friday 10 April 2026 00:47:20 +0000 (0:00:00.080) 0:00:10.305 ********** 2026-04-10 00:47:37.472027 | orchestrator | changed: [testbed-node-2] 2026-04-10 
00:47:37.472036 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:47:37.472044 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:47:37.472052 | orchestrator |
2026-04-10 00:47:37.472060 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-10 00:47:37.472068 | orchestrator | Friday 10 April 2026 00:47:30 +0000 (0:00:10.019) 0:00:20.324 **********
2026-04-10 00:47:37.472076 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:47:37.472084 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:47:37.472091 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:47:37.472099 | orchestrator |
2026-04-10 00:47:37.472107 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:47:37.472116 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:47:37.472125 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:47:37.472138 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 00:47:37.472146 | orchestrator |
2026-04-10 00:47:37.472154 | orchestrator |
2026-04-10 00:47:37.472162 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:47:37.472170 | orchestrator | Friday 10 April 2026 00:47:33 +0000 (0:00:03.749) 0:00:24.074 **********
2026-04-10 00:47:37.472178 | orchestrator | ===============================================================================
2026-04-10 00:47:37.472186 | orchestrator | redis : Restart redis container ---------------------------------------- 10.02s
2026-04-10 00:47:37.472194 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.75s
2026-04-10 00:47:37.472202 | orchestrator | redis : Copying over default config.json files -------------------------- 2.44s
2026-04-10 00:47:37.472210 | orchestrator | redis : Copying over redis config files --------------------------------- 2.41s
2026-04-10 00:47:37.472217 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.86s
2026-04-10 00:47:37.472225 | orchestrator | redis : Check redis containers ------------------------------------------ 1.43s
2026-04-10 00:47:37.472233 | orchestrator | redis : include_tasks --------------------------------------------------- 0.58s
2026-04-10 00:47:37.472241 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2026-04-10 00:47:37.472249 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.38s
2026-04-10 00:47:37.472257 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s
2026-04-10 00:47:37.472265 | orchestrator | 2026-04-10 00:47:37 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:47:37.472420 | orchestrator | 2026-04-10 00:47:37 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED
2026-04-10 00:47:37.473754 | orchestrator | 2026-04-10 00:47:37 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:47:37.475377 | orchestrator | 2026-04-10 00:47:37 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED
2026-04-10 00:47:37.475462 | orchestrator | 2026-04-10 00:47:37 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:47:40.510263 | orchestrator | 2026-04-10 00:47:40 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state STARTED
2026-04-10 00:47:40.512482 | orchestrator | 2026-04-10 00:47:40 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:47:40.513637 | orchestrator | 2026-04-10 00:47:40 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED
2026-04-10 00:47:40.513671 | 
orchestrator | 2026-04-10 00:47:40 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:47:40.516492 | orchestrator | 2026-04-10 00:47:40 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED
2026-04-10 00:47:40.516543 | orchestrator | 2026-04-10 00:47:40 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:48:17.128060 | orchestrator | 2026-04-10 00:48:17 | INFO  | Task 9162fe99-6841-4542-8363-1fa7dfcf23f9 is in state SUCCESS
2026-04-10 00:48:17.128901 | orchestrator |
2026-04-10 00:48:17.128934 | orchestrator |
2026-04-10 00:48:17.128941 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-10 00:48:17.128947 | orchestrator |
2026-04-10 00:48:17.128953 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-10 00:48:17.128960 | orchestrator | Friday 10 April 2026 00:47:09 +0000 (0:00:00.292) 0:00:00.292 **********
2026-04-10 00:48:17.128966 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:48:17.128973 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:48:17.128977 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:48:17.128982 | orchestrator | ok: [testbed-node-3] 
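The recurring "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records come from a simple polling loop over the queued task IDs. A minimal sketch of that control flow (`wait_for_tasks` and `get_state` are hypothetical helpers, not OSISM's actual client code; the real client queries a task API):

```python
import time

def wait_for_tasks(task_ids, get_state, sleep=time.sleep, poll=1):
    """Poll every pending task until all of them report SUCCESS."""
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding inside the loop is safe.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {poll} second(s) until the next check")
            sleep(poll)
```

Injecting a fake `sleep` makes the loop testable without waiting, which is also why the interval is a parameter rather than a hard-coded constant.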
2026-04-10 00:48:17.128987 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:48:17.128992 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:48:17.128998 | orchestrator | 2026-04-10 00:48:17.129002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:48:17.129005 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:00.607) 0:00:00.899 ********** 2026-04-10 00:48:17.129009 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-10 00:48:17.129013 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-10 00:48:17.129016 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-10 00:48:17.129028 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-10 00:48:17.129031 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-10 00:48:17.129035 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-10 00:48:17.129038 | orchestrator | 2026-04-10 00:48:17.129041 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-10 00:48:17.129044 | orchestrator | 2026-04-10 00:48:17.129048 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-10 00:48:17.129053 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:00.757) 0:00:01.657 ********** 2026-04-10 00:48:17.129060 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:48:17.129065 | orchestrator | 2026-04-10 00:48:17.129068 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-10 00:48:17.129071 | 
orchestrator | Friday 10 April 2026 00:47:12 +0000 (0:00:01.199) 0:00:02.856 ********** 2026-04-10 00:48:17.129075 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-10 00:48:17.129078 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-10 00:48:17.129082 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-10 00:48:17.129085 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-10 00:48:17.129088 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-10 00:48:17.129091 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-10 00:48:17.129097 | orchestrator | 2026-04-10 00:48:17.129102 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-10 00:48:17.129107 | orchestrator | Friday 10 April 2026 00:47:13 +0000 (0:00:01.726) 0:00:04.582 ********** 2026-04-10 00:48:17.129125 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-10 00:48:17.129130 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-10 00:48:17.129136 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-10 00:48:17.129140 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-10 00:48:17.129143 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-10 00:48:17.129146 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-10 00:48:17.129149 | orchestrator | 2026-04-10 00:48:17.129152 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-10 00:48:17.129155 | orchestrator | Friday 10 April 2026 00:47:15 +0000 (0:00:01.667) 0:00:06.249 ********** 2026-04-10 00:48:17.129160 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-10 00:48:17.129165 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:48:17.129171 | orchestrator | skipping: [testbed-node-1] 
=> (item=openvswitch)  2026-04-10 00:48:17.129174 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:48:17.129177 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-10 00:48:17.129180 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:48:17.129183 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-10 00:48:17.129186 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:48:17.129189 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-10 00:48:17.129194 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:48:17.129198 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-10 00:48:17.129204 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:48:17.129208 | orchestrator | 2026-04-10 00:48:17.129213 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-10 00:48:17.129219 | orchestrator | Friday 10 April 2026 00:47:16 +0000 (0:00:01.162) 0:00:07.412 ********** 2026-04-10 00:48:17.129224 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:48:17.129274 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:48:17.129279 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:48:17.129283 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:48:17.129286 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:48:17.129289 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:48:17.129292 | orchestrator | 2026-04-10 00:48:17.129295 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-10 00:48:17.129300 | orchestrator | Friday 10 April 2026 00:47:17 +0000 (0:00:00.713) 0:00:08.125 ********** 2026-04-10 00:48:17.129318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
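The `module-load` tasks above first load the `openvswitch` kernel module, then persist it via a modules-load.d drop-in so it survives reboots. A conceptual sketch of the persistence step (file naming and the changed/ok check are assumptions, not the role's actual implementation):

```python
from pathlib import Path

def persist_module(name, modules_load_d="/etc/modules-load.d"):
    """Ensure <modules_load_d>/<name>.conf lists the module; report change."""
    conf = Path(modules_load_d) / f"{name}.conf"
    desired = name + "\n"
    if conf.exists() and conf.read_text() == desired:
        return False  # already persisted -> Ansible would report "ok"
    conf.write_text(desired)
    return True  # file created or updated -> Ansible reports "changed"
```

Running it twice returns `True` then `False`, mirroring why the task logs `changed` on first deploy and `ok` on reruns; the complementary "Drop module persistence" task (skipped here) would delete the drop-in.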
2026-04-10 00:48:17.129340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129376 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129379 | orchestrator | 2026-04-10 00:48:17.129383 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-10 00:48:17.129386 | orchestrator | Friday 10 April 2026 00:47:19 +0000 (0:00:01.642) 0:00:09.767 ********** 2026-04-10 00:48:17.129391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129407 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129496 | orchestrator | 2026-04-10 00:48:17.129501 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] 
**************************** 2026-04-10 00:48:17.129510 | orchestrator | Friday 10 April 2026 00:47:21 +0000 (0:00:02.606) 0:00:12.373 ********** 2026-04-10 00:48:17.129513 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:48:17.129516 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:48:17.129519 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:48:17.129522 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:48:17.129525 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:48:17.129528 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:48:17.129533 | orchestrator | 2026-04-10 00:48:17.129538 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-10 00:48:17.129542 | orchestrator | Friday 10 April 2026 00:47:22 +0000 (0:00:00.883) 0:00:13.257 ********** 2026-04-10 00:48:17.129549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129616 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-10 00:48:17.129648 | orchestrator | 2026-04-10 00:48:17.129652 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-10 00:48:17.129657 | orchestrator | Friday 10 April 2026 00:47:24 +0000 
(0:00:01.921) 0:00:15.179 ********** 2026-04-10 00:48:17.129661 | orchestrator | 2026-04-10 00:48:17.129680 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-10 00:48:17.129685 | orchestrator | Friday 10 April 2026 00:47:24 +0000 (0:00:00.169) 0:00:15.349 ********** 2026-04-10 00:48:17.129690 | orchestrator | 2026-04-10 00:48:17.129696 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-10 00:48:17.129701 | orchestrator | Friday 10 April 2026 00:47:24 +0000 (0:00:00.135) 0:00:15.484 ********** 2026-04-10 00:48:17.129706 | orchestrator | 2026-04-10 00:48:17.129710 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-10 00:48:17.129713 | orchestrator | Friday 10 April 2026 00:47:24 +0000 (0:00:00.135) 0:00:15.620 ********** 2026-04-10 00:48:17.129716 | orchestrator | 2026-04-10 00:48:17.129719 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-10 00:48:17.129724 | orchestrator | Friday 10 April 2026 00:47:25 +0000 (0:00:00.283) 0:00:15.903 ********** 2026-04-10 00:48:17.129729 | orchestrator | 2026-04-10 00:48:17.129733 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-10 00:48:17.129736 | orchestrator | Friday 10 April 2026 00:47:25 +0000 (0:00:00.130) 0:00:16.033 ********** 2026-04-10 00:48:17.129739 | orchestrator | 2026-04-10 00:48:17.129742 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-10 00:48:17.129745 | orchestrator | Friday 10 April 2026 00:47:25 +0000 (0:00:00.161) 0:00:16.195 ********** 2026-04-10 00:48:17.129748 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:48:17.129751 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:48:17.129756 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:48:17.129761 | 
orchestrator | changed: [testbed-node-3] 2026-04-10 00:48:17.129767 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:48:17.129774 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:48:17.129780 | orchestrator | 2026-04-10 00:48:17.129785 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-10 00:48:17.129789 | orchestrator | Friday 10 April 2026 00:47:39 +0000 (0:00:13.811) 0:00:30.007 ********** 2026-04-10 00:48:17.129793 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:48:17.129799 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:48:17.129803 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:48:17.129808 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:48:17.129813 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:48:17.129823 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:48:17.129828 | orchestrator | 2026-04-10 00:48:17.129833 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-10 00:48:17.129837 | orchestrator | Friday 10 April 2026 00:47:41 +0000 (0:00:02.138) 0:00:32.145 ********** 2026-04-10 00:48:17.129842 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:48:17.129846 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:48:17.129851 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:48:17.129857 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:48:17.129861 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:48:17.129864 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:48:17.129867 | orchestrator | 2026-04-10 00:48:17.129870 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-10 00:48:17.129873 | orchestrator | Friday 10 April 2026 00:47:51 +0000 (0:00:10.033) 0:00:42.178 ********** 2026-04-10 00:48:17.129877 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-0'}) 2026-04-10 00:48:17.129880 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-10 00:48:17.129883 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-10 00:48:17.129887 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-10 00:48:17.129893 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-10 00:48:17.129899 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-10 00:48:17.129902 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-10 00:48:17.129906 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-10 00:48:17.129909 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-10 00:48:17.129912 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-10 00:48:17.129915 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-10 00:48:17.129918 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-10 00:48:17.129924 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-10 00:48:17.129928 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 
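The "Set system-id, hostname and hw-offload" items above each carry a column (`external_ids` or `other_config`), a key name, a value, and optionally `state: absent` (the `hw-offload` entries, which report `ok` because the key is already absent). A sketch of how such an item could translate into an `ovs-vsctl` invocation — the item shape is taken from the log, while the exact command construction is an illustrative assumption:

```python
# Sketch: turn a logged task item into an ovs-vsctl argument vector.
# Item shape ({'col', 'name', 'value', optional 'state'}) comes from the
# log above; the command layout is an illustrative assumption.
def ovs_vsctl_cmd(item: dict) -> list[str]:
    col, name = item["col"], item["name"]
    if item.get("state") == "absent":
        # e.g. hw-offload is removed rather than set on these nodes
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".", col, name]
    return ["ovs-vsctl", "set", "Open_vSwitch", ".",
            f"{col}:{name}={item['value']}"]

print(ovs_vsctl_cmd({"col": "external_ids", "name": "system-id",
                     "value": "testbed-node-0"}))
```

Per-node values (`system-id`, `hostname`) differ only in the hostname suffix, which is why every node reports a near-identical item line.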
2026-04-10 00:48:17.129932 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-10 00:48:17.129935 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-10 00:48:17.129939 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-10 00:48:17.129942 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-10 00:48:17.129946 | orchestrator | 
2026-04-10 00:48:17.129950 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-10 00:48:17.129953 | orchestrator | Friday 10 April 2026 00:48:00 +0000 (0:00:08.736) 0:00:50.915 **********
2026-04-10 00:48:17.129958 | orchestrator | skipping: [testbed-node-3] => (item=br-ex) 
2026-04-10 00:48:17.129963 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:48:17.129967 | orchestrator | skipping: [testbed-node-4] => (item=br-ex) 
2026-04-10 00:48:17.129973 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:48:17.129977 | orchestrator | skipping: [testbed-node-5] => (item=br-ex) 
2026-04-10 00:48:17.129980 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:48:17.129984 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-10 00:48:17.129988 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-10 00:48:17.129991 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-10 00:48:17.129995 | orchestrator | 
2026-04-10 00:48:17.129998 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-10 00:48:17.130002 | orchestrator | Friday 10 April 2026 00:48:02 +0000 (0:00:02.636) 0:00:53.551 **********
2026-04-10 00:48:17.130006 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0']) 
2026-04-10 00:48:17.130009 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:48:17.130052 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0']) 
2026-04-10 00:48:17.130056 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:48:17.130060 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0']) 
2026-04-10 00:48:17.130063 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:48:17.130067 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-10 00:48:17.130071 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-10 00:48:17.130075 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-10 00:48:17.130080 | orchestrator | 
2026-04-10 00:48:17.130085 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-10 00:48:17.130089 | orchestrator | Friday 10 April 2026 00:48:06 +0000 (0:00:03.370) 0:00:56.922 **********
2026-04-10 00:48:17.130094 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:48:17.130099 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:48:17.130104 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:48:17.130110 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:48:17.130115 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:48:17.130120 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:48:17.130125 | orchestrator | 
2026-04-10 00:48:17.130131 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:48:17.130136 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-10 00:48:17.130142 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-10 00:48:17.130147 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-10 00:48:17.130152 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-10 00:48:17.130158 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-10 00:48:17.130166 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-10 00:48:17.130172 | orchestrator | 
2026-04-10 00:48:17.130177 | orchestrator | 
2026-04-10 00:48:17.130180 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:48:17.130184 | orchestrator | Friday 10 April 2026 00:48:13 +0000 (0:00:07.524) 0:01:04.446 **********
2026-04-10 00:48:17.130187 | orchestrator | ===============================================================================
2026-04-10 00:48:17.130191 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.56s
2026-04-10 00:48:17.130194 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 13.81s
2026-04-10 00:48:17.130198 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.74s
2026-04-10 00:48:17.130205 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.37s
2026-04-10 00:48:17.130209 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.64s
2026-04-10 00:48:17.130212 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.61s
2026-04-10 00:48:17.130218 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.14s
2026-04-10 00:48:17.130222 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.92s
2026-04-10 00:48:17.130225 | orchestrator | module-load : Load modules ---------------------------------------------- 1.73s
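The PLAY RECAP above is the quickest place to spot a broken host when post-processing these consoles: any non-zero `failed=` or `unreachable=` counter means the play did not complete cleanly. A minimal sketch of that check (the regex and the `recap_failures` helper are illustrative, not part of the job tooling):

```python
import re

# Matches one Ansible PLAY RECAP host line, e.g.
# "testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 ..."
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_failures(lines):
    """Return the hosts whose recap reports failed or unreachable tasks."""
    bad = []
    for line in lines:
        m = RECAP_RE.search(line)
        if m and (int(m.group("failed")) or int(m.group("unreachable"))):
            bad.append(m.group("host"))
    return bad
```

Fed the six recap lines above, this returns an empty list, since every testbed node reports `failed=0` and `unreachable=0`.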
2026-04-10 00:48:17.130229 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.67s
2026-04-10 00:48:17.130304 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.64s
2026-04-10 00:48:17.130313 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.20s
2026-04-10 00:48:17.130316 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.16s
2026-04-10 00:48:17.130319 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.02s
2026-04-10 00:48:17.130323 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.88s
2026-04-10 00:48:17.130326 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s
2026-04-10 00:48:17.130329 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.71s
2026-04-10 00:48:17.130332 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s
2026-04-10 00:48:17.130335 | orchestrator | 2026-04-10 00:48:17 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:48:17.130340 | orchestrator | 2026-04-10 00:48:17 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED
2026-04-10 00:48:17.133144 | orchestrator | 2026-04-10 00:48:17 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED
2026-04-10 00:48:17.133598 | orchestrator | 2026-04-10 00:48:17 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state STARTED
2026-04-10 00:48:17.133979 | orchestrator | 2026-04-10 00:48:17 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED
2026-04-10 00:48:17.134205 | orchestrator | 2026-04-10 00:48:17 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:49:12.423176 | orchestrator | 2026-04-10 00:49:12 | INFO  | Task e8b44b08-fca7-489c-ad6c-cffa7b4acfa6 is in state STARTED
2026-04-10 00:49:12.424171 | orchestrator | 2026-04-10 00:49:12 | INFO  | Task 8b78ff2f-62f8-4625-bd57-5883ecb6852a is in state STARTED
2026-04-10 00:49:12.424970 | orchestrator | 2026-04-10 00:49:12 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:49:12.426311 | orchestrator | 2026-04-10 00:49:12 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED
2026-04-10 00:49:12.426360 | orchestrator | 2026-04-10 00:49:12 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED
2026-04-10 00:49:12.428608 | orchestrator | 2026-04-10 00:49:12 | INFO  | Task 0abe025a-d819-42e0-936f-c54c7e30d890 is in state SUCCESS
2026-04-10 00:49:12.429511 | orchestrator | 
2026-04-10 00:49:12.429559 | orchestrator | 
2026-04-10 00:49:12.429570 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-10 00:49:12.429599 | orchestrator | 
2026-04-10 00:49:12.429638 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-10 00:49:12.429646 | orchestrator | Friday 10 April 2026 00:44:33 +0000 (0:00:00.253) 0:00:00.253 **********
2026-04-10 00:49:12.429661 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:49:12.429669 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:49:12.429674 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:49:12.429683 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.429688 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.429692 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.429695 | orchestrator | 
2026-04-10 00:49:12.429700 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-10 00:49:12.429704 | orchestrator | Friday 10 April 2026 00:44:34 +0000 (0:00:00.588) 0:00:00.842 **********
2026-04-10 00:49:12.429710 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.429718 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.429724 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.429731 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.429737 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.429743 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.429749 | orchestrator | 
2026-04-10 00:49:12.429755 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-10 00:49:12.429762 | orchestrator | Friday 10 April 2026 00:44:35 +0000 (0:00:00.711) 0:00:01.553 **********
2026-04-10 00:49:12.429768 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.429775 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.429782 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.429788 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.429794 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.429801 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.429807 | orchestrator | 
2026-04-10 00:49:12.429815 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-10 00:49:12.429846 | orchestrator | Friday 10 April 2026 00:44:35 +0000 (0:00:00.476) 0:00:02.030 **********
2026-04-10 00:49:12.429852 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.429859 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.429865 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.429871 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.429877 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.429906 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.429914 | orchestrator | 
2026-04-10 00:49:12.429918 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-10 00:49:12.429922 | orchestrator | Friday 10 April 2026 00:44:38 +0000 (0:00:02.266) 0:00:04.296 **********
2026-04-10 00:49:12.429926 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.429930 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.429936 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.429942 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.429948 | orchestrator | changed: [testbed-node-1]
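The `Task … is in state STARTED` heartbeat lines interleaved with the play output come from the deployment CLI polling its background tasks once per interval until each leaves the STARTED state. A minimal sketch of that poll-until-done pattern (`wait_for_tasks` and the `fetch_state` callback are hypothetical names, not the actual OSISM API):

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, log=print):
    """Poll each task until all have left STARTED, logging one status
    line per task per round, like the job output above."""
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
            log(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state
        pending -= results.keys()
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

Each round re-reports every still-pending task, which is why the same task IDs repeat with fresh timestamps until one flips to SUCCESS.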
2026-04-10 00:49:12.429955 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.429964 | orchestrator | 
2026-04-10 00:49:12.429984 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-10 00:49:12.429992 | orchestrator | Friday 10 April 2026 00:44:39 +0000 (0:00:01.310) 0:00:05.606 **********
2026-04-10 00:49:12.430053 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.430063 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.430069 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.430075 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.430081 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.430088 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.430094 | orchestrator | 
2026-04-10 00:49:12.430101 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-10 00:49:12.430107 | orchestrator | Friday 10 April 2026 00:44:40 +0000 (0:00:01.031) 0:00:06.637 **********
2026-04-10 00:49:12.430114 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430120 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430127 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430133 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430140 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430146 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430153 | orchestrator | 
2026-04-10 00:49:12.430159 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-10 00:49:12.430164 | orchestrator | Friday 10 April 2026 00:44:41 +0000 (0:00:00.793) 0:00:07.430 **********
2026-04-10 00:49:12.430168 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430173 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430179 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430185 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430192 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430198 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430204 | orchestrator | 
2026-04-10 00:49:12.430210 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-10 00:49:12.430216 | orchestrator | Friday 10 April 2026 00:44:42 +0000 (0:00:00.919) 0:00:08.350 **********
2026-04-10 00:49:12.430223 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-10 00:49:12.430230 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-10 00:49:12.430236 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430243 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-10 00:49:12.430249 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-10 00:49:12.430254 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430258 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-10 00:49:12.430274 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-10 00:49:12.430280 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430286 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-10 00:49:12.430308 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-10 00:49:12.430314 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430320 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-10 00:49:12.430327 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-10 00:49:12.430332 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430338 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-10 00:49:12.430344 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
2026-04-10 00:49:12.430350 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430356 | orchestrator | 
2026-04-10 00:49:12.430362 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-10 00:49:12.430367 | orchestrator | Friday 10 April 2026 00:44:43 +0000 (0:00:01.277) 0:00:09.627 **********
2026-04-10 00:49:12.430372 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430378 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430383 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430388 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430394 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430399 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430405 | orchestrator | 
2026-04-10 00:49:12.430411 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-10 00:49:12.430418 | orchestrator | Friday 10 April 2026 00:44:45 +0000 (0:00:00.740) 0:00:11.375 **********
2026-04-10 00:49:12.430423 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:49:12.430429 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:49:12.430435 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:49:12.430440 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.430446 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.430451 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.430457 | orchestrator | 
2026-04-10 00:49:12.430463 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-10 00:49:12.430468 | orchestrator | Friday 10 April 2026 00:44:45 +0000 (0:00:00.740) 0:00:12.115 **********
2026-04-10 00:49:12.430507 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.430512 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.430516 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.430520 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.430524 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.430528 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.430531 | orchestrator | 
2026-04-10 00:49:12.430535 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-10 00:49:12.430539 | orchestrator | Friday 10 April 2026 00:44:51 +0000 (0:00:06.053) 0:00:18.169 **********
2026-04-10 00:49:12.430543 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430547 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430550 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430554 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430558 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430562 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430566 | orchestrator | 
2026-04-10 00:49:12.430570 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-10 00:49:12.430579 | orchestrator | Friday 10 April 2026 00:44:53 +0000 (0:00:01.917) 0:00:20.086 **********
2026-04-10 00:49:12.430583 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430587 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430591 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430601 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430605 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430609 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430612 | orchestrator | 
2026-04-10 00:49:12.430616 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-10 00:49:12.430622 | orchestrator | Friday 10 April 2026 00:44:56 +0000 (0:00:03.164) 0:00:23.251 **********
2026-04-10 00:49:12.430625 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430629 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430633 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430637 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430640 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430644 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430648 | orchestrator | 
2026-04-10 00:49:12.430652 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-10 00:49:12.430656 | orchestrator | Friday 10 April 2026 00:44:57 +0000 (0:00:00.787) 0:00:24.038 **********
2026-04-10 00:49:12.430659 | orchestrator | skipping: [testbed-node-3] => (item=rancher) 
2026-04-10 00:49:12.430664 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s) 
2026-04-10 00:49:12.430667 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.430671 | orchestrator | skipping: [testbed-node-4] => (item=rancher) 
2026-04-10 00:49:12.430675 | orchestrator | skipping: [testbed-node-5] => (item=rancher) 
2026-04-10 00:49:12.430679 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s) 
2026-04-10 00:49:12.430683 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s) 
2026-04-10 00:49:12.430687 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.430690 | orchestrator | skipping: [testbed-node-0] => (item=rancher) 
2026-04-10 00:49:12.430694 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s) 
2026-04-10 00:49:12.430698 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.430702 | orchestrator | skipping: [testbed-node-1] => (item=rancher) 
2026-04-10 00:49:12.430706 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-10 00:49:12.430710 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.430714 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:12.430717 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-10 00:49:12.430721 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-10 00:49:12.430725 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:12.430730 | orchestrator | 2026-04-10 00:49:12.430733 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-10 00:49:12.430743 | orchestrator | Friday 10 April 2026 00:44:58 +0000 (0:00:00.793) 0:00:24.832 ********** 2026-04-10 00:49:12.430749 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:49:12.430756 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:49:12.430761 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:49:12.430765 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.430768 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:12.430772 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:12.430776 | orchestrator | 2026-04-10 00:49:12.430780 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-10 00:49:12.430785 | orchestrator | Friday 10 April 2026 00:44:59 +0000 (0:00:00.752) 0:00:25.585 ********** 2026-04-10 00:49:12.430788 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:49:12.430792 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:49:12.430796 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:49:12.430800 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.430804 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:12.430807 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:12.430811 | orchestrator | 
2026-04-10 00:49:12.430816 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-10 00:49:12.430823 | orchestrator |
2026-04-10 00:49:12.430827 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-10 00:49:12.430831 | orchestrator | Friday 10 April 2026 00:45:00 +0000 (0:00:01.317) 0:00:26.903 **********
2026-04-10 00:49:12.430835 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.430839 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.430842 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.430846 | orchestrator |
2026-04-10 00:49:12.430850 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-10 00:49:12.430854 | orchestrator | Friday 10 April 2026 00:45:01 +0000 (0:00:00.890) 0:00:27.793 **********
2026-04-10 00:49:12.430858 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.430862 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.430866 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.430869 | orchestrator |
2026-04-10 00:49:12.430873 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-10 00:49:12.430877 | orchestrator | Friday 10 April 2026 00:45:02 +0000 (0:00:01.397) 0:00:29.190 **********
2026-04-10 00:49:12.430881 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.430884 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.430890 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.430896 | orchestrator |
2026-04-10 00:49:12.430902 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-10 00:49:12.430907 | orchestrator | Friday 10 April 2026 00:45:03 +0000 (0:00:00.827) 0:00:30.018 **********
2026-04-10 00:49:12.430913 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.430919 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.430926 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.430932 | orchestrator |
2026-04-10 00:49:12.430938 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-10 00:49:12.430942 | orchestrator | Friday 10 April 2026 00:45:04 +0000 (0:00:00.836) 0:00:30.855 **********
2026-04-10 00:49:12.430945 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.430949 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.430953 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.430956 | orchestrator |
2026-04-10 00:49:12.430960 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-10 00:49:12.430968 | orchestrator | Friday 10 April 2026 00:45:04 +0000 (0:00:00.369) 0:00:31.225 **********
2026-04-10 00:49:12.430972 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.430976 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.430979 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.430983 | orchestrator |
2026-04-10 00:49:12.430987 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-10 00:49:12.430990 | orchestrator | Friday 10 April 2026 00:45:05 +0000 (0:00:00.817) 0:00:32.043 **********
2026-04-10 00:49:12.430994 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.430998 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431002 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431006 | orchestrator |
2026-04-10 00:49:12.431010 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-10 00:49:12.431014 | orchestrator | Friday 10 April 2026 00:45:07 +0000 (0:00:02.113) 0:00:34.156 **********
2026-04-10 00:49:12.431017 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:49:12.431021 | orchestrator |
2026-04-10 00:49:12.431025 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-10 00:49:12.431029 | orchestrator | Friday 10 April 2026 00:45:08 +0000 (0:00:00.810) 0:00:34.967 **********
2026-04-10 00:49:12.431033 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431036 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431040 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431044 | orchestrator |
2026-04-10 00:49:12.431048 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-10 00:49:12.431051 | orchestrator | Friday 10 April 2026 00:45:10 +0000 (0:00:01.756) 0:00:36.723 **********
2026-04-10 00:49:12.431058 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431062 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431066 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431070 | orchestrator |
2026-04-10 00:49:12.431074 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-10 00:49:12.431078 | orchestrator | Friday 10 April 2026 00:45:11 +0000 (0:00:00.924) 0:00:37.647 **********
2026-04-10 00:49:12.431081 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431085 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431089 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431093 | orchestrator |
2026-04-10 00:49:12.431097 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-10 00:49:12.431101 | orchestrator | Friday 10 April 2026 00:45:12 +0000 (0:00:01.278) 0:00:38.926 **********
2026-04-10 00:49:12.431104 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431108 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431113 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431117 | orchestrator |
2026-04-10 00:49:12.431120 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-10 00:49:12.431130 | orchestrator | Friday 10 April 2026 00:45:14 +0000 (0:00:02.086) 0:00:41.013 **********
2026-04-10 00:49:12.431134 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431138 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.431141 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431145 | orchestrator |
2026-04-10 00:49:12.431149 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-10 00:49:12.431152 | orchestrator | Friday 10 April 2026 00:45:15 +0000 (0:00:00.427) 0:00:41.440 **********
2026-04-10 00:49:12.431156 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.431160 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431165 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431169 | orchestrator |
2026-04-10 00:49:12.431173 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-10 00:49:12.431176 | orchestrator | Friday 10 April 2026 00:45:15 +0000 (0:00:00.552) 0:00:41.993 **********
2026-04-10 00:49:12.431180 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431184 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431188 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431192 | orchestrator |
2026-04-10 00:49:12.431196 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-10 00:49:12.431200 | orchestrator | Friday 10 April 2026 00:45:18 +0000 (0:00:02.460) 0:00:44.453 **********
2026-04-10 00:49:12.431204 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431208 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431212 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431216 | orchestrator |
2026-04-10 00:49:12.431220 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-10 00:49:12.431224 | orchestrator | Friday 10 April 2026 00:45:20 +0000 (0:00:02.224) 0:00:46.677 **********
2026-04-10 00:49:12.431228 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431232 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431235 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431239 | orchestrator |
2026-04-10 00:49:12.431243 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-10 00:49:12.431247 | orchestrator | Friday 10 April 2026 00:45:20 +0000 (0:00:00.346) 0:00:47.024 **********
2026-04-10 00:49:12.431251 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-10 00:49:12.431256 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-10 00:49:12.431260 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-10 00:49:12.431270 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-10 00:49:12.431274 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-10 00:49:12.431280 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-10 00:49:12.431285 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-10 00:49:12.431288 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-10 00:49:12.431292 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-10 00:49:12.431296 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-10 00:49:12.431300 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-10 00:49:12.431304 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-10 00:49:12.431308 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431312 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431317 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431324 | orchestrator |
2026-04-10 00:49:12.431332 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-10 00:49:12.431342 | orchestrator | Friday 10 April 2026 00:46:04 +0000 (0:00:44.130) 0:01:31.155 **********
2026-04-10 00:49:12.431348 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.431355 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431362 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431367 | orchestrator |
2026-04-10 00:49:12.431373 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-10 00:49:12.431379 | orchestrator | Friday 10 April 2026 00:46:05 +0000 (0:00:00.476) 0:01:31.631 **********
2026-04-10 00:49:12.431386 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431392 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431399 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431405 | orchestrator |
2026-04-10 00:49:12.431411 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-10 00:49:12.431418 | orchestrator | Friday 10 April 2026 00:46:06 +0000 (0:00:01.118) 0:01:32.750 **********
2026-04-10 00:49:12.431424 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431431 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431438 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431445 | orchestrator |
2026-04-10 00:49:12.431456 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-10 00:49:12.431463 | orchestrator | Friday 10 April 2026 00:46:07 +0000 (0:00:01.464) 0:01:34.214 **********
2026-04-10 00:49:12.431470 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431492 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431498 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431504 | orchestrator |
2026-04-10 00:49:12.431510 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-10 00:49:12.431516 | orchestrator | Friday 10 April 2026 00:46:47 +0000 (0:00:39.160) 0:02:13.375 **********
2026-04-10 00:49:12.431522 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431528 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431533 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431540 | orchestrator |
2026-04-10 00:49:12.431545 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-10 00:49:12.431558 | orchestrator | Friday 10 April 2026 00:46:47 +0000 (0:00:00.585) 0:02:13.961 **********
2026-04-10 00:49:12.431564 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431570 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431576 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431583 | orchestrator |
2026-04-10 00:49:12.431589 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-10 00:49:12.431595 | orchestrator | Friday 10 April 2026 00:46:48 +0000 (0:00:00.822) 0:02:14.783 **********
2026-04-10 00:49:12.431602 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431608 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431615 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431621 | orchestrator |
2026-04-10 00:49:12.431628 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-10 00:49:12.431634 | orchestrator | Friday 10 April 2026 00:46:49 +0000 (0:00:00.540) 0:02:15.323 **********
2026-04-10 00:49:12.431641 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431647 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431654 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431660 | orchestrator |
2026-04-10 00:49:12.431666 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-10 00:49:12.431673 | orchestrator | Friday 10 April 2026 00:46:49 +0000 (0:00:00.582) 0:02:15.906 **********
2026-04-10 00:49:12.431679 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.431686 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.431692 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431699 | orchestrator |
2026-04-10 00:49:12.431705 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-10 00:49:12.431711 | orchestrator | Friday 10 April 2026 00:46:49 +0000 (0:00:00.363) 0:02:16.270 **********
2026-04-10 00:49:12.431718 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431724 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431731 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431738 | orchestrator |
2026-04-10 00:49:12.431744 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-10 00:49:12.431751 | orchestrator | Friday 10 April 2026 00:46:50 +0000 (0:00:00.844) 0:02:17.114 **********
2026-04-10 00:49:12.431757 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431764 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431770 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431777 | orchestrator |
2026-04-10 00:49:12.431784 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-10 00:49:12.431790 | orchestrator | Friday 10 April 2026 00:46:51 +0000 (0:00:00.621) 0:02:17.736 **********
2026-04-10 00:49:12.431797 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431805 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431812 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431819 | orchestrator |
2026-04-10 00:49:12.431826 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-10 00:49:12.431834 | orchestrator | Friday 10 April 2026 00:46:52 +0000 (0:00:00.757) 0:02:18.494 **********
2026-04-10 00:49:12.431841 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:49:12.431847 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:49:12.431854 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:49:12.431860 | orchestrator |
2026-04-10 00:49:12.431867 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-10 00:49:12.431873 | orchestrator | Friday 10 April 2026 00:46:52 +0000 (0:00:00.700) 0:02:19.195 **********
2026-04-10 00:49:12.431879 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.431885 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431892 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431898 | orchestrator |
2026-04-10 00:49:12.431904 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-10 00:49:12.431911 | orchestrator | Friday 10 April 2026 00:46:53 +0000 (0:00:00.395) 0:02:19.590 **********
2026-04-10 00:49:12.431924 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:49:12.431932 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:49:12.431938 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:49:12.431944 | orchestrator |
2026-04-10 00:49:12.431949 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-10 00:49:12.431955 | orchestrator | Friday 10 April 2026 00:46:53 +0000 (0:00:00.270) 0:02:19.860 **********
2026-04-10 00:49:12.431962 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.431968 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.432394 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.432428 | orchestrator |
2026-04-10 00:49:12.432432 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-10 00:49:12.432437 | orchestrator | Friday 10 April 2026 00:46:54 +0000 (0:00:00.723) 0:02:20.584 **********
2026-04-10 00:49:12.432441 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:49:12.432445 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:49:12.432449 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:49:12.432453 | orchestrator |
2026-04-10 00:49:12.432457 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-10 00:49:12.432462 | orchestrator | Friday 10 April 2026 00:46:55 +0000 (0:00:00.745) 0:02:21.330 **********
2026-04-10 00:49:12.432466 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-10 00:49:12.432588 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-10 00:49:12.432595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-10 00:49:12.432599 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-10 00:49:12.432602 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-10 00:49:12.432606 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-10 00:49:12.432610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-10 00:49:12.432614 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-10 00:49:12.432618 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-10 00:49:12.432622 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-10 00:49:12.432626 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-10 00:49:12.432630 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-10 00:49:12.432633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-10 00:49:12.432637 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-10 00:49:12.432641 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-10 00:49:12.432645 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-10 00:49:12.432649 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-10 00:49:12.432653 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-10 00:49:12.432656 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-10 00:49:12.432660 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-10 00:49:12.432664 | orchestrator |
2026-04-10 00:49:12.432672 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-10 00:49:12.432701 | orchestrator |
2026-04-10 00:49:12.432709 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-10 00:49:12.432715 | orchestrator | Friday 10 April 2026 00:46:59 +0000 (0:00:04.111) 0:02:25.441 **********
2026-04-10 00:49:12.432720 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:49:12.432726 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:49:12.432733 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:49:12.432738 | orchestrator |
2026-04-10 00:49:12.432744 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-10 00:49:12.432750 | orchestrator | Friday 10 April 2026 00:46:59 +0000 (0:00:00.276) 0:02:25.717 **********
2026-04-10 00:49:12.432756 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:49:12.432761 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:49:12.432767 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:49:12.432774 | orchestrator |
2026-04-10 00:49:12.432779 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-10 00:49:12.432785 | orchestrator | Friday 10 April 2026 00:46:59 +0000 (0:00:00.556) 0:02:26.274 **********
2026-04-10 00:49:12.432791 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:49:12.432798 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:49:12.432804 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:49:12.432811 | orchestrator |
2026-04-10 00:49:12.432817 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-10 00:49:12.432823 | orchestrator | Friday 10 April 2026 00:47:00 +0000 (0:00:00.532) 0:02:26.807 **********
2026-04-10 00:49:12.432829 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:49:12.432835 | orchestrator |
2026-04-10 00:49:12.432841 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-10 00:49:12.432846 | orchestrator | Friday 10 April 2026 00:47:01 +0000 (0:00:00.516) 0:02:27.323 **********
2026-04-10 00:49:12.432852 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.432858 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.432864 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.432869 | orchestrator |
2026-04-10 00:49:12.432875 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-10 00:49:12.432880 | orchestrator | Friday 10 April 2026 00:47:01 +0000 (0:00:00.309) 0:02:27.632 **********
2026-04-10 00:49:12.432886 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.432892 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.432898 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.432903 | orchestrator |
2026-04-10 00:49:12.432908 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-10 00:49:12.432914 | orchestrator | Friday 10 April 2026 00:47:01 +0000 (0:00:00.623) 0:02:28.256 **********
2026-04-10 00:49:12.432920 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:49:12.432926 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:49:12.432931 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:49:12.432937 | orchestrator |
2026-04-10 00:49:12.432943 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-10 00:49:12.432948 | orchestrator | Friday 10 April 2026 00:47:02 +0000 (0:00:00.419) 0:02:28.675 **********
2026-04-10 00:49:12.432953 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.432959 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.432965 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.432971 | orchestrator |
2026-04-10 00:49:12.432984 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-10 00:49:12.432990 | orchestrator | Friday 10 April 2026 00:47:03 +0000 (0:00:00.692) 0:02:29.367 **********
2026-04-10 00:49:12.432995 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.433001 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.433006 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.433012 | orchestrator |
2026-04-10 00:49:12.433018 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-10 00:49:12.433033 | orchestrator | Friday 10 April 2026 00:47:04 +0000 (0:00:01.122) 0:02:30.489 **********
2026-04-10 00:49:12.433039 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.433045 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.433050 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.433056 | orchestrator |
2026-04-10 00:49:12.433062 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-10 00:49:12.433067 | orchestrator | Friday 10 April 2026 00:47:05 +0000 (0:00:01.711) 0:02:32.201 **********
2026-04-10 00:49:12.433072 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:49:12.433078 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:49:12.433083 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:49:12.433089 | orchestrator |
2026-04-10 00:49:12.433096 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-10 00:49:12.433103 | orchestrator |
2026-04-10 00:49:12.433109 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-10 00:49:12.433115 | orchestrator | Friday 10 April 2026 00:47:16 +0000 (0:00:10.310) 0:02:42.511 **********
2026-04-10 00:49:12.433122 | orchestrator | ok: [testbed-manager]
2026-04-10 00:49:12.433129 | orchestrator |
2026-04-10 00:49:12.433135 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-10 00:49:12.433142 | orchestrator | Friday 10 April 2026 00:47:16 +0000 (0:00:00.656) 0:02:43.167 **********
2026-04-10 00:49:12.433148 | orchestrator | changed: [testbed-manager]
2026-04-10 00:49:12.433154 | orchestrator |
2026-04-10 00:49:12.433160 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-10 00:49:12.433167 | orchestrator | Friday 10 April 2026 00:47:17 +0000 (0:00:00.396) 0:02:43.564 **********
2026-04-10 00:49:12.433174 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-10 00:49:12.433183 | orchestrator |
2026-04-10 00:49:12.433189 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-10 00:49:12.433195 | orchestrator | Friday 10 April 2026 00:47:17 +0000 (0:00:00.514) 0:02:44.078 **********
2026-04-10 00:49:12.433201 | orchestrator | changed: [testbed-manager]
2026-04-10 00:49:12.433207 | orchestrator |
2026-04-10 00:49:12.433214 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-10 00:49:12.433222 | orchestrator | Friday 10 April 2026 00:47:18 +0000 (0:00:00.907) 0:02:44.986 **********
2026-04-10 00:49:12.433233 | orchestrator | changed: [testbed-manager]
2026-04-10 00:49:12.433239 | orchestrator |
2026-04-10 00:49:12.433245 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-10 00:49:12.433250 | orchestrator | Friday 10 April 2026 00:47:19 +0000 (0:00:00.560) 0:02:45.546 **********
2026-04-10 00:49:12.433256 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-10 00:49:12.433262 | orchestrator |
2026-04-10 00:49:12.433267 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-10 00:49:12.433273 | orchestrator | Friday 10 April 2026 00:47:20 +0000 (0:00:01.650) 0:02:47.196 **********
2026-04-10 00:49:12.433278 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-10 00:49:12.433284 | orchestrator |
2026-04-10 00:49:12.433290 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-10 00:49:12.433296 | orchestrator | Friday 10 April 2026 00:47:21 +0000 (0:00:00.822) 0:02:48.019 **********
2026-04-10 00:49:12.433301 | orchestrator | changed: [testbed-manager]
2026-04-10 00:49:12.433307 | orchestrator |
2026-04-10 00:49:12.433313 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-10 00:49:12.433318 | orchestrator | Friday 10 April 2026 00:47:22 +0000 (0:00:00.409) 0:02:48.428 **********
2026-04-10 00:49:12.433324 | orchestrator | changed: [testbed-manager]
2026-04-10 00:49:12.433329 | orchestrator |
2026-04-10 00:49:12.433335 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-10 00:49:12.433340 | orchestrator |
2026-04-10 00:49:12.433346 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-10 00:49:12.433359 | orchestrator | Friday 10 April 2026 00:47:22 +0000 (0:00:00.384) 0:02:48.813 **********
2026-04-10 00:49:12.433365 | orchestrator | ok: [testbed-manager]
2026-04-10 00:49:12.433371 | orchestrator |
2026-04-10 00:49:12.433377 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-10 00:49:12.433382 | orchestrator | Friday 10 April 2026 00:47:22 +0000 (0:00:00.123) 0:02:48.937 **********
2026-04-10 00:49:12.433388 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-10 00:49:12.433395 | orchestrator |
2026-04-10 00:49:12.433400 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-10 00:49:12.433407 | orchestrator | Friday 10 April 2026 00:47:22 +0000 (0:00:00.205) 0:02:49.143 **********
2026-04-10 00:49:12.433412 | orchestrator | ok: [testbed-manager]
2026-04-10 00:49:12.433418 | orchestrator |
2026-04-10 00:49:12.433425 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-10 00:49:12.433431 | orchestrator | Friday 10 April 2026 00:47:23 +0000 (0:00:01.048) 0:02:50.192 **********
2026-04-10 00:49:12.433436 | orchestrator | ok: [testbed-manager]
2026-04-10 00:49:12.433443 | orchestrator |
2026-04-10 00:49:12.433449 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-10 00:49:12.433456 | orchestrator | Friday 10 April 2026 00:47:25 +0000 (0:00:01.345) 0:02:51.538 **********
2026-04-10 00:49:12.433463 | orchestrator | changed: [testbed-manager]
2026-04-10 00:49:12.433470 | orchestrator |
2026-04-10 00:49:12.433492 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-10 00:49:12.433499 | orchestrator | Friday 10 April 2026 00:47:26 +0000 (0:00:01.735) 0:02:53.273 **********
2026-04-10 00:49:12.433506 | orchestrator | ok: [testbed-manager]
2026-04-10 00:49:12.433512 | orchestrator |
2026-04-10 00:49:12.433526 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-10 00:49:12.433533 | orchestrator | Friday 10 April 2026 00:47:27 +0000 (0:00:00.421) 0:02:53.694 ********** 2026-04-10 00:49:12.433539 | orchestrator | changed: [testbed-manager] 2026-04-10 00:49:12.433545 | orchestrator | 2026-04-10 00:49:12.433551 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-10 00:49:12.433558 | orchestrator | Friday 10 April 2026 00:47:35 +0000 (0:00:07.796) 0:03:01.490 ********** 2026-04-10 00:49:12.433564 | orchestrator | changed: [testbed-manager] 2026-04-10 00:49:12.433572 | orchestrator | 2026-04-10 00:49:12.433578 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-10 00:49:12.433585 | orchestrator | Friday 10 April 2026 00:47:48 +0000 (0:00:13.399) 0:03:14.890 ********** 2026-04-10 00:49:12.433591 | orchestrator | ok: [testbed-manager] 2026-04-10 00:49:12.433597 | orchestrator | 2026-04-10 00:49:12.433603 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-04-10 00:49:12.433609 | orchestrator | 2026-04-10 00:49:12.433615 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-10 00:49:12.433621 | orchestrator | Friday 10 April 2026 00:47:49 +0000 (0:00:00.450) 0:03:15.340 ********** 2026-04-10 00:49:12.433627 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:12.433634 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:49:12.433640 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:49:12.433646 | orchestrator | 2026-04-10 00:49:12.433652 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-10 00:49:12.433658 | orchestrator | Friday 10 April 2026 00:47:49 +0000 (0:00:00.397) 0:03:15.737 ********** 2026-04-10 00:49:12.433663 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.433669 | orchestrator | skipping: [testbed-node-1] 
2026-04-10 00:49:12.433676 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:12.433682 | orchestrator | 2026-04-10 00:49:12.433688 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-10 00:49:12.433694 | orchestrator | Friday 10 April 2026 00:47:49 +0000 (0:00:00.326) 0:03:16.064 ********** 2026-04-10 00:49:12.433707 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:49:12.433714 | orchestrator | 2026-04-10 00:49:12.433720 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-10 00:49:12.433726 | orchestrator | Friday 10 April 2026 00:47:50 +0000 (0:00:00.559) 0:03:16.623 ********** 2026-04-10 00:49:12.433733 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-10 00:49:12.433738 | orchestrator | 2026-04-10 00:49:12.433745 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-04-10 00:49:12.433751 | orchestrator | Friday 10 April 2026 00:47:51 +0000 (0:00:00.795) 0:03:17.419 ********** 2026-04-10 00:49:12.433763 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 00:49:12.433769 | orchestrator | 2026-04-10 00:49:12.433775 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-10 00:49:12.433781 | orchestrator | Friday 10 April 2026 00:47:51 +0000 (0:00:00.725) 0:03:18.145 ********** 2026-04-10 00:49:12.433787 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.433793 | orchestrator | 2026-04-10 00:49:12.433799 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-10 00:49:12.433805 | orchestrator | Friday 10 April 2026 00:47:52 +0000 (0:00:00.253) 0:03:18.398 ********** 2026-04-10 00:49:12.433811 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 00:49:12.433817 | 
orchestrator | 2026-04-10 00:49:12.433823 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-10 00:49:12.433829 | orchestrator | Friday 10 April 2026 00:47:53 +0000 (0:00:00.910) 0:03:19.308 ********** 2026-04-10 00:49:12.433836 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.433843 | orchestrator | 2026-04-10 00:49:12.433848 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-10 00:49:12.433854 | orchestrator | Friday 10 April 2026 00:47:53 +0000 (0:00:00.105) 0:03:19.414 ********** 2026-04-10 00:49:12.433861 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.433867 | orchestrator | 2026-04-10 00:49:12.433873 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-10 00:49:12.433879 | orchestrator | Friday 10 April 2026 00:47:53 +0000 (0:00:00.096) 0:03:19.510 ********** 2026-04-10 00:49:12.433886 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.433892 | orchestrator | 2026-04-10 00:49:12.433899 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-10 00:49:12.433906 | orchestrator | Friday 10 April 2026 00:47:53 +0000 (0:00:00.100) 0:03:19.610 ********** 2026-04-10 00:49:12.433912 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.433918 | orchestrator | 2026-04-10 00:49:12.433925 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-10 00:49:12.433931 | orchestrator | Friday 10 April 2026 00:47:53 +0000 (0:00:00.106) 0:03:19.717 ********** 2026-04-10 00:49:12.433937 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-10 00:49:12.433944 | orchestrator | 2026-04-10 00:49:12.433950 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-10 00:49:12.433956 | orchestrator | Friday 10 April 
2026 00:47:57 +0000 (0:00:04.410) 0:03:24.128 ********** 2026-04-10 00:49:12.433962 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-10 00:49:12.433968 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-04-10 00:49:12.433975 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-10 00:49:12.433981 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-10 00:49:12.433987 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-10 00:49:12.433993 | orchestrator | 2026-04-10 00:49:12.434000 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-10 00:49:12.434007 | orchestrator | Friday 10 April 2026 00:48:40 +0000 (0:00:42.748) 0:04:06.876 ********** 2026-04-10 00:49:12.434166 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 00:49:12.434176 | orchestrator | 2026-04-10 00:49:12.434182 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-10 00:49:12.434188 | orchestrator | Friday 10 April 2026 00:48:42 +0000 (0:00:01.425) 0:04:08.301 ********** 2026-04-10 00:49:12.434194 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-10 00:49:12.434199 | orchestrator | 2026-04-10 00:49:12.434205 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-10 00:49:12.434211 | orchestrator | Friday 10 April 2026 00:48:43 +0000 (0:00:01.726) 0:04:10.028 ********** 2026-04-10 00:49:12.434217 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-10 00:49:12.434223 | orchestrator | 2026-04-10 00:49:12.434229 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-10 00:49:12.434235 | orchestrator | Friday 10 April 2026 00:48:45 +0000 
(0:00:01.318) 0:04:11.347 ********** 2026-04-10 00:49:12.434240 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.434246 | orchestrator | 2026-04-10 00:49:12.434251 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-10 00:49:12.434257 | orchestrator | Friday 10 April 2026 00:48:45 +0000 (0:00:00.253) 0:04:11.600 ********** 2026-04-10 00:49:12.434263 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-10 00:49:12.434268 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-10 00:49:12.434274 | orchestrator | 2026-04-10 00:49:12.434279 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-10 00:49:12.434285 | orchestrator | Friday 10 April 2026 00:48:47 +0000 (0:00:02.122) 0:04:13.722 ********** 2026-04-10 00:49:12.434291 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.434297 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:12.434303 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:12.434308 | orchestrator | 2026-04-10 00:49:12.434315 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-10 00:49:12.434321 | orchestrator | Friday 10 April 2026 00:48:47 +0000 (0:00:00.286) 0:04:14.008 ********** 2026-04-10 00:49:12.434328 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:12.434334 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:49:12.434340 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:49:12.434346 | orchestrator | 2026-04-10 00:49:12.434352 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-10 00:49:12.434359 | orchestrator | 2026-04-10 00:49:12.434364 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-10 
00:49:12.434367 | orchestrator | Friday 10 April 2026 00:48:48 +0000 (0:00:00.853) 0:04:14.862 ********** 2026-04-10 00:49:12.434371 | orchestrator | ok: [testbed-manager] 2026-04-10 00:49:12.434375 | orchestrator | 2026-04-10 00:49:12.434384 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-10 00:49:12.434388 | orchestrator | Friday 10 April 2026 00:48:48 +0000 (0:00:00.156) 0:04:15.019 ********** 2026-04-10 00:49:12.434392 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-10 00:49:12.434395 | orchestrator | 2026-04-10 00:49:12.434399 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-10 00:49:12.434403 | orchestrator | Friday 10 April 2026 00:48:49 +0000 (0:00:00.306) 0:04:15.325 ********** 2026-04-10 00:49:12.434407 | orchestrator | changed: [testbed-manager] 2026-04-10 00:49:12.434410 | orchestrator | 2026-04-10 00:49:12.434414 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-10 00:49:12.434418 | orchestrator | 2026-04-10 00:49:12.434423 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-10 00:49:12.434429 | orchestrator | Friday 10 April 2026 00:48:53 +0000 (0:00:04.662) 0:04:19.988 ********** 2026-04-10 00:49:12.434434 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:49:12.434440 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:49:12.434454 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:49:12.434464 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:12.434469 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:49:12.434498 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:49:12.434504 | orchestrator | 2026-04-10 00:49:12.434510 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-10 00:49:12.434515 | orchestrator | 
Friday 10 April 2026 00:48:54 +0000 (0:00:00.633) 0:04:20.622 ********** 2026-04-10 00:49:12.434520 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-10 00:49:12.434526 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-10 00:49:12.434531 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-10 00:49:12.434537 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-10 00:49:12.434543 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-10 00:49:12.434549 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-10 00:49:12.434555 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-10 00:49:12.434562 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-10 00:49:12.434568 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-10 00:49:12.434574 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-10 00:49:12.434580 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-10 00:49:12.434586 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-10 00:49:12.434600 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-10 00:49:12.434608 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-10 00:49:12.434614 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-10 00:49:12.434620 | orchestrator | ok: 
[testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-10 00:49:12.434626 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-10 00:49:12.434632 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-10 00:49:12.434638 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-10 00:49:12.434645 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-10 00:49:12.434651 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-10 00:49:12.434657 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-10 00:49:12.434663 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-10 00:49:12.434669 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-10 00:49:12.434675 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-10 00:49:12.434681 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-10 00:49:12.434685 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-10 00:49:12.434689 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-10 00:49:12.434693 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-10 00:49:12.434700 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-10 00:49:12.434711 | orchestrator | 2026-04-10 00:49:12.434718 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-10 
00:49:12.434724 | orchestrator | Friday 10 April 2026 00:49:08 +0000 (0:00:14.610) 0:04:35.232 ********** 2026-04-10 00:49:12.434730 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:49:12.434737 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:49:12.434743 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:49:12.434749 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:12.434755 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.434766 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:12.434772 | orchestrator | 2026-04-10 00:49:12.434780 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-10 00:49:12.434784 | orchestrator | Friday 10 April 2026 00:49:09 +0000 (0:00:00.479) 0:04:35.712 ********** 2026-04-10 00:49:12.434790 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:49:12.434796 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:49:12.434802 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:49:12.434808 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:12.434814 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:12.434820 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:12.434827 | orchestrator | 2026-04-10 00:49:12.434833 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:49:12.434839 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:49:12.434848 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-10 00:49:12.434855 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-10 00:49:12.434861 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-10 00:49:12.434867 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-10 00:49:12.434873 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-10 00:49:12.434880 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-10 00:49:12.434886 | orchestrator | 2026-04-10 00:49:12.434893 | orchestrator | 2026-04-10 00:49:12.434900 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:49:12.434905 | orchestrator | Friday 10 April 2026 00:49:09 +0000 (0:00:00.546) 0:04:36.258 ********** 2026-04-10 00:49:12.434912 | orchestrator | =============================================================================== 2026-04-10 00:49:12.434917 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.13s 2026-04-10 00:49:12.434924 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.75s 2026-04-10 00:49:12.434931 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 39.16s 2026-04-10 00:49:12.434942 | orchestrator | Manage labels ---------------------------------------------------------- 14.61s 2026-04-10 00:49:12.434948 | orchestrator | kubectl : Install required packages ------------------------------------ 13.40s 2026-04-10 00:49:12.434955 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.31s 2026-04-10 00:49:12.434961 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.79s 2026-04-10 00:49:12.434968 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.05s 2026-04-10 00:49:12.434974 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.66s 2026-04-10 00:49:12.434985 | orchestrator | 
k3s_server_post : Install Cilium ---------------------------------------- 4.41s 2026-04-10 00:49:12.434991 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.11s 2026-04-10 00:49:12.434997 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.16s 2026-04-10 00:49:12.435003 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.46s 2026-04-10 00:49:12.435010 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.27s 2026-04-10 00:49:12.435016 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.22s 2026-04-10 00:49:12.435022 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.12s 2026-04-10 00:49:12.435029 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.11s 2026-04-10 00:49:12.435035 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.09s 2026-04-10 00:49:12.435041 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.92s 2026-04-10 00:49:12.435047 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.76s 2026-04-10 00:49:12.435053 | orchestrator | 2026-04-10 00:49:12 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:12.435059 | orchestrator | 2026-04-10 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:15.505647 | orchestrator | 2026-04-10 00:49:15 | INFO  | Task e8b44b08-fca7-489c-ad6c-cffa7b4acfa6 is in state STARTED 2026-04-10 00:49:15.506668 | orchestrator | 2026-04-10 00:49:15 | INFO  | Task 8b78ff2f-62f8-4625-bd57-5883ecb6852a is in state STARTED 2026-04-10 00:49:15.506723 | orchestrator | 2026-04-10 00:49:15 | INFO  | Task 
7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:15.508767 | orchestrator | 2026-04-10 00:49:15 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:15.509679 | orchestrator | 2026-04-10 00:49:15 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:15.509706 | orchestrator | 2026-04-10 00:49:15 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:15.509711 | orchestrator | 2026-04-10 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:18.547104 | orchestrator | 2026-04-10 00:49:18 | INFO  | Task e8b44b08-fca7-489c-ad6c-cffa7b4acfa6 is in state SUCCESS 2026-04-10 00:49:18.547214 | orchestrator | 2026-04-10 00:49:18 | INFO  | Task 8b78ff2f-62f8-4625-bd57-5883ecb6852a is in state STARTED 2026-04-10 00:49:18.547249 | orchestrator | 2026-04-10 00:49:18 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:18.547964 | orchestrator | 2026-04-10 00:49:18 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:18.548346 | orchestrator | 2026-04-10 00:49:18 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:18.549068 | orchestrator | 2026-04-10 00:49:18 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:18.549099 | orchestrator | 2026-04-10 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:21.586394 | orchestrator | 2026-04-10 00:49:21 | INFO  | Task 8b78ff2f-62f8-4625-bd57-5883ecb6852a is in state SUCCESS 2026-04-10 00:49:21.588604 | orchestrator | 2026-04-10 00:49:21 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:21.592026 | orchestrator | 2026-04-10 00:49:21 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:21.595178 | orchestrator | 2026-04-10 00:49:21 | INFO  | Task 
44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:21.598991 | orchestrator | 2026-04-10 00:49:21 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:21.599229 | orchestrator | 2026-04-10 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:24.645614 | orchestrator | 2026-04-10 00:49:24 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:24.645819 | orchestrator | 2026-04-10 00:49:24 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:24.646294 | orchestrator | 2026-04-10 00:49:24 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:24.647092 | orchestrator | 2026-04-10 00:49:24 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:24.649807 | orchestrator | 2026-04-10 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:27.684989 | orchestrator | 2026-04-10 00:49:27 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:27.686922 | orchestrator | 2026-04-10 00:49:27 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:27.689109 | orchestrator | 2026-04-10 00:49:27 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:27.690796 | orchestrator | 2026-04-10 00:49:27 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:27.690847 | orchestrator | 2026-04-10 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:30.724153 | orchestrator | 2026-04-10 00:49:30 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:30.724256 | orchestrator | 2026-04-10 00:49:30 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:30.725457 | orchestrator | 2026-04-10 00:49:30 | INFO  | Task 
44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:30.726270 | orchestrator | 2026-04-10 00:49:30 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:30.726324 | orchestrator | 2026-04-10 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:33.764100 | orchestrator | 2026-04-10 00:49:33 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:33.764481 | orchestrator | 2026-04-10 00:49:33 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:33.764663 | orchestrator | 2026-04-10 00:49:33 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:33.766136 | orchestrator | 2026-04-10 00:49:33 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:33.766186 | orchestrator | 2026-04-10 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:36.793766 | orchestrator | 2026-04-10 00:49:36 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:36.793844 | orchestrator | 2026-04-10 00:49:36 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:36.794665 | orchestrator | 2026-04-10 00:49:36 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:36.795455 | orchestrator | 2026-04-10 00:49:36 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:36.795552 | orchestrator | 2026-04-10 00:49:36 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:39.819023 | orchestrator | 2026-04-10 00:49:39 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:39.820082 | orchestrator | 2026-04-10 00:49:39 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:39.821362 | orchestrator | 2026-04-10 00:49:39 | INFO  | Task 
44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:39.822644 | orchestrator | 2026-04-10 00:49:39 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:39.822686 | orchestrator | 2026-04-10 00:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:42.862431 | orchestrator | 2026-04-10 00:49:42 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:42.863684 | orchestrator | 2026-04-10 00:49:42 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:42.865741 | orchestrator | 2026-04-10 00:49:42 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:42.866949 | orchestrator | 2026-04-10 00:49:42 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:42.867607 | orchestrator | 2026-04-10 00:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:45.907482 | orchestrator | 2026-04-10 00:49:45 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:45.908200 | orchestrator | 2026-04-10 00:49:45 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:45.908645 | orchestrator | 2026-04-10 00:49:45 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:45.909548 | orchestrator | 2026-04-10 00:49:45 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:45.909561 | orchestrator | 2026-04-10 00:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:48.951291 | orchestrator | 2026-04-10 00:49:48 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:48.951981 | orchestrator | 2026-04-10 00:49:48 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:48.952477 | orchestrator | 2026-04-10 00:49:48 | INFO  | Task 
44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:48.955517 | orchestrator | 2026-04-10 00:49:48 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state STARTED 2026-04-10 00:49:48.955561 | orchestrator | 2026-04-10 00:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:51.996387 | orchestrator | 2026-04-10 00:49:51 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:51.999266 | orchestrator | 2026-04-10 00:49:51 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:52.003417 | orchestrator | 2026-04-10 00:49:52 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:52.006160 | orchestrator | 2026-04-10 00:49:52 | INFO  | Task 03132d02-4a40-40fb-b2bc-b73c6ffcf00f is in state SUCCESS 2026-04-10 00:49:52.006303 | orchestrator | 2026-04-10 00:49:52.006343 | orchestrator | 2026-04-10 00:49:52.006352 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-10 00:49:52.006359 | orchestrator | 2026-04-10 00:49:52.006366 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-10 00:49:52.006372 | orchestrator | Friday 10 April 2026 00:49:13 +0000 (0:00:00.241) 0:00:00.241 ********** 2026-04-10 00:49:52.006379 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-10 00:49:52.006387 | orchestrator | 2026-04-10 00:49:52.006391 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-10 00:49:52.006395 | orchestrator | Friday 10 April 2026 00:49:14 +0000 (0:00:01.099) 0:00:01.341 ********** 2026-04-10 00:49:52.006413 | orchestrator | changed: [testbed-manager] 2026-04-10 00:49:52.006418 | orchestrator | 2026-04-10 00:49:52.006429 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-10 00:49:52.006434 
| orchestrator | Friday 10 April 2026 00:49:16 +0000 (0:00:01.233) 0:00:02.574 ********** 2026-04-10 00:49:52.006437 | orchestrator | changed: [testbed-manager] 2026-04-10 00:49:52.006441 | orchestrator | 2026-04-10 00:49:52.006445 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:49:52.006449 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:49:52.006455 | orchestrator | 2026-04-10 00:49:52.006459 | orchestrator | 2026-04-10 00:49:52.006463 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:49:52.006467 | orchestrator | Friday 10 April 2026 00:49:16 +0000 (0:00:00.428) 0:00:03.003 ********** 2026-04-10 00:49:52.006471 | orchestrator | =============================================================================== 2026-04-10 00:49:52.006474 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s 2026-04-10 00:49:52.006478 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.10s 2026-04-10 00:49:52.006482 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s 2026-04-10 00:49:52.006486 | orchestrator | 2026-04-10 00:49:52.006491 | orchestrator | 2026-04-10 00:49:52.006510 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-10 00:49:52.006518 | orchestrator | 2026-04-10 00:49:52.006525 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-10 00:49:52.006532 | orchestrator | Friday 10 April 2026 00:49:13 +0000 (0:00:00.191) 0:00:00.191 ********** 2026-04-10 00:49:52.006538 | orchestrator | ok: [testbed-manager] 2026-04-10 00:49:52.006546 | orchestrator | 2026-04-10 00:49:52.006553 | orchestrator | TASK [Create .kube directory] 
************************************************** 2026-04-10 00:49:52.006558 | orchestrator | Friday 10 April 2026 00:49:14 +0000 (0:00:01.062) 0:00:01.254 ********** 2026-04-10 00:49:52.006564 | orchestrator | ok: [testbed-manager] 2026-04-10 00:49:52.006573 | orchestrator | 2026-04-10 00:49:52.006582 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-10 00:49:52.006589 | orchestrator | Friday 10 April 2026 00:49:14 +0000 (0:00:00.553) 0:00:01.808 ********** 2026-04-10 00:49:52.006595 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-10 00:49:52.006602 | orchestrator | 2026-04-10 00:49:52.006608 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-10 00:49:52.006615 | orchestrator | Friday 10 April 2026 00:49:15 +0000 (0:00:00.939) 0:00:02.747 ********** 2026-04-10 00:49:52.006622 | orchestrator | changed: [testbed-manager] 2026-04-10 00:49:52.006629 | orchestrator | 2026-04-10 00:49:52.006636 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-10 00:49:52.006643 | orchestrator | Friday 10 April 2026 00:49:16 +0000 (0:00:01.011) 0:00:03.759 ********** 2026-04-10 00:49:52.006648 | orchestrator | changed: [testbed-manager] 2026-04-10 00:49:52.006652 | orchestrator | 2026-04-10 00:49:52.006656 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-10 00:49:52.006660 | orchestrator | Friday 10 April 2026 00:49:17 +0000 (0:00:00.458) 0:00:04.217 ********** 2026-04-10 00:49:52.006665 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-10 00:49:52.006723 | orchestrator | 2026-04-10 00:49:52.006732 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-10 00:49:52.006737 | orchestrator | Friday 10 April 2026 00:49:18 +0000 (0:00:01.579) 0:00:05.797 ********** 2026-04-10 
00:49:52.006741 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-10 00:49:52.006746 | orchestrator | 2026-04-10 00:49:52.006750 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-10 00:49:52.006754 | orchestrator | Friday 10 April 2026 00:49:19 +0000 (0:00:00.781) 0:00:06.578 ********** 2026-04-10 00:49:52.006765 | orchestrator | ok: [testbed-manager] 2026-04-10 00:49:52.006769 | orchestrator | 2026-04-10 00:49:52.006773 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-10 00:49:52.006777 | orchestrator | Friday 10 April 2026 00:49:20 +0000 (0:00:00.372) 0:00:06.951 ********** 2026-04-10 00:49:52.006786 | orchestrator | ok: [testbed-manager] 2026-04-10 00:49:52.006791 | orchestrator | 2026-04-10 00:49:52.006795 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:49:52.006800 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:49:52.006804 | orchestrator | 2026-04-10 00:49:52.006808 | orchestrator | 2026-04-10 00:49:52.006812 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:49:52.006817 | orchestrator | Friday 10 April 2026 00:49:20 +0000 (0:00:00.288) 0:00:07.239 ********** 2026-04-10 00:49:52.006821 | orchestrator | =============================================================================== 2026-04-10 00:49:52.006825 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.58s 2026-04-10 00:49:52.006828 | orchestrator | Get home directory of operator user ------------------------------------- 1.06s 2026-04-10 00:49:52.006833 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.01s 2026-04-10 00:49:52.006847 | orchestrator | Get kubeconfig file 
----------------------------------------------------- 0.94s 2026-04-10 00:49:52.006852 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.78s 2026-04-10 00:49:52.006856 | orchestrator | Create .kube directory -------------------------------------------------- 0.55s 2026-04-10 00:49:52.006861 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.46s 2026-04-10 00:49:52.006865 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2026-04-10 00:49:52.006869 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s 2026-04-10 00:49:52.006873 | orchestrator | 2026-04-10 00:49:52.007369 | orchestrator | 2026-04-10 00:49:52.007424 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-04-10 00:49:52.007433 | orchestrator | 2026-04-10 00:49:52.007438 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-10 00:49:52.007443 | orchestrator | Friday 10 April 2026 00:47:29 +0000 (0:00:00.127) 0:00:00.127 ********** 2026-04-10 00:49:52.007448 | orchestrator | ok: [localhost] => { 2026-04-10 00:49:52.007453 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-04-10 00:49:52.007459 | orchestrator | } 2026-04-10 00:49:52.007464 | orchestrator | 2026-04-10 00:49:52.007469 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-04-10 00:49:52.007474 | orchestrator | Friday 10 April 2026 00:47:29 +0000 (0:00:00.031) 0:00:00.159 ********** 2026-04-10 00:49:52.007479 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-04-10 00:49:52.007485 | orchestrator | ...ignoring 2026-04-10 00:49:52.007490 | orchestrator | 2026-04-10 00:49:52.007537 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-04-10 00:49:52.007543 | orchestrator | Friday 10 April 2026 00:47:34 +0000 (0:00:04.169) 0:00:04.329 ********** 2026-04-10 00:49:52.007548 | orchestrator | skipping: [localhost] 2026-04-10 00:49:52.007553 | orchestrator | 2026-04-10 00:49:52.007558 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-04-10 00:49:52.007563 | orchestrator | Friday 10 April 2026 00:47:34 +0000 (0:00:00.084) 0:00:04.413 ********** 2026-04-10 00:49:52.007568 | orchestrator | ok: [localhost] 2026-04-10 00:49:52.007574 | orchestrator | 2026-04-10 00:49:52.007582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:49:52.007590 | orchestrator | 2026-04-10 00:49:52.007627 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:49:52.007638 | orchestrator | Friday 10 April 2026 00:47:34 +0000 (0:00:00.271) 0:00:04.684 ********** 2026-04-10 00:49:52.007646 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:52.007655 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:49:52.007662 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:49:52.007669 | orchestrator | 2026-04-10 00:49:52.007674 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:49:52.007679 | orchestrator | Friday 10 April 2026 00:47:34 +0000 (0:00:00.426) 0:00:05.111 ********** 2026-04-10 00:49:52.007684 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-10 00:49:52.007689 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-04-10 00:49:52.007694 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-10 00:49:52.007699 | orchestrator | 2026-04-10 00:49:52.007703 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-10 00:49:52.007708 | orchestrator | 2026-04-10 00:49:52.007713 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-10 00:49:52.007718 | orchestrator | Friday 10 April 2026 00:47:35 +0000 (0:00:00.508) 0:00:05.619 ********** 2026-04-10 00:49:52.007724 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:49:52.007730 | orchestrator | 2026-04-10 00:49:52.007738 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-10 00:49:52.007749 | orchestrator | Friday 10 April 2026 00:47:35 +0000 (0:00:00.539) 0:00:06.159 ********** 2026-04-10 00:49:52.007758 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:52.007766 | orchestrator | 2026-04-10 00:49:52.007773 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-10 00:49:52.007781 | orchestrator | Friday 10 April 2026 00:47:37 +0000 (0:00:01.548) 0:00:07.708 ********** 2026-04-10 00:49:52.007790 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:52.007798 | orchestrator | 2026-04-10 00:49:52.007817 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-10 00:49:52.007825 | orchestrator | Friday 10 April 2026 00:47:38 +0000 (0:00:00.622) 0:00:08.330 ********** 2026-04-10 00:49:52.007833 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:52.007841 | orchestrator | 2026-04-10 00:49:52.007850 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-10 00:49:52.007858 | 
orchestrator | Friday 10 April 2026 00:47:38 +0000 (0:00:00.412) 0:00:08.742 ********** 2026-04-10 00:49:52.007866 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:52.007874 | orchestrator | 2026-04-10 00:49:52.007879 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-10 00:49:52.007884 | orchestrator | Friday 10 April 2026 00:47:38 +0000 (0:00:00.396) 0:00:09.139 ********** 2026-04-10 00:49:52.007889 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:52.007893 | orchestrator | 2026-04-10 00:49:52.007898 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-10 00:49:52.007903 | orchestrator | Friday 10 April 2026 00:47:39 +0000 (0:00:00.350) 0:00:09.490 ********** 2026-04-10 00:49:52.007908 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:49:52.007913 | orchestrator | 2026-04-10 00:49:52.007918 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-10 00:49:52.007923 | orchestrator | Friday 10 April 2026 00:47:41 +0000 (0:00:01.780) 0:00:11.270 ********** 2026-04-10 00:49:52.007928 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:52.007933 | orchestrator | 2026-04-10 00:49:52.007941 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-10 00:49:52.007952 | orchestrator | Friday 10 April 2026 00:47:42 +0000 (0:00:01.791) 0:00:13.061 ********** 2026-04-10 00:49:52.007965 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:52.007981 | orchestrator | 2026-04-10 00:49:52.007988 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-10 00:49:52.007997 | orchestrator | Friday 10 April 2026 00:47:44 +0000 (0:00:01.729) 0:00:14.791 ********** 2026-04-10 00:49:52.008005 | orchestrator | 
skipping: [testbed-node-0] 2026-04-10 00:49:52.008012 | orchestrator | 2026-04-10 00:49:52.008042 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-10 00:49:52.008051 | orchestrator | Friday 10 April 2026 00:47:44 +0000 (0:00:00.372) 0:00:15.163 ********** 2026-04-10 00:49:52.008063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008094 | orchestrator | 2026-04-10 00:49:52.008103 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-10 00:49:52.008110 | orchestrator | Friday 10 April 2026 00:47:46 +0000 (0:00:01.265) 0:00:16.429 ********** 2026-04-10 00:49:52.008130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008167 | orchestrator | 2026-04-10 00:49:52.008175 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-10 00:49:52.008181 | orchestrator | Friday 10 April 2026 00:47:47 +0000 (0:00:01.589) 0:00:18.019 ********** 2026-04-10 00:49:52.008186 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-10 00:49:52.008194 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-10 00:49:52.008202 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-10 00:49:52.008213 | orchestrator | 2026-04-10 00:49:52.008223 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-04-10 00:49:52.008231 | orchestrator | Friday 10 April 2026 00:47:49 +0000 (0:00:01.488) 0:00:19.507 ********** 2026-04-10 00:49:52.008246 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-10 00:49:52.008254 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-10 00:49:52.008263 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-10 00:49:52.008271 | orchestrator | 2026-04-10 00:49:52.008279 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-10 00:49:52.008287 | orchestrator | Friday 10 April 2026 00:47:51 +0000 (0:00:02.168) 0:00:21.676 ********** 2026-04-10 00:49:52.008295 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-10 00:49:52.008304 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-10 00:49:52.008312 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-10 00:49:52.008320 | orchestrator | 2026-04-10 00:49:52.008330 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-10 00:49:52.008341 | orchestrator | Friday 10 April 2026 00:47:53 +0000 (0:00:02.209) 0:00:23.886 ********** 2026-04-10 00:49:52.008360 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-10 00:49:52.008370 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-10 00:49:52.008378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-10 00:49:52.008386 | orchestrator | 2026-04-10 00:49:52.008391 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-04-10 00:49:52.008398 | orchestrator | Friday 10 April 2026 00:47:55 +0000 (0:00:01.638) 0:00:25.524 ********** 2026-04-10 00:49:52.008406 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-10 00:49:52.008413 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-10 00:49:52.008421 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-10 00:49:52.008429 | orchestrator | 2026-04-10 00:49:52.008436 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-10 00:49:52.008444 | orchestrator | Friday 10 April 2026 00:47:57 +0000 (0:00:01.898) 0:00:27.423 ********** 2026-04-10 00:49:52.008452 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-10 00:49:52.008460 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-10 00:49:52.008468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-10 00:49:52.008476 | orchestrator | 2026-04-10 00:49:52.008485 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-10 00:49:52.008493 | orchestrator | Friday 10 April 2026 00:47:58 +0000 (0:00:01.574) 0:00:28.998 ********** 2026-04-10 00:49:52.008540 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:52.008548 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:52.008556 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:52.008564 | orchestrator | 2026-04-10 00:49:52.008572 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-10 00:49:52.008581 | orchestrator | Friday 10 April 2026 00:47:59 
+0000 (0:00:00.639) 0:00:29.637 ********** 2026-04-10 00:49:52.008590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:49:52.008625 | orchestrator | 2026-04-10 00:49:52.008630 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-10 00:49:52.008635 | orchestrator | Friday 10 April 2026 00:48:00 +0000 (0:00:01.291) 0:00:30.928 ********** 2026-04-10 00:49:52.008640 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:49:52.008644 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:49:52.008649 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:49:52.008654 | orchestrator | 2026-04-10 00:49:52.008659 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-10 00:49:52.008664 | 
orchestrator | Friday 10 April 2026 00:48:01 +0000 (0:00:01.003) 0:00:31.932 ********** 2026-04-10 00:49:52.008669 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:49:52.008673 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:49:52.008678 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:49:52.008683 | orchestrator | 2026-04-10 00:49:52.008688 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-10 00:49:52.008693 | orchestrator | Friday 10 April 2026 00:48:11 +0000 (0:00:09.755) 0:00:41.687 ********** 2026-04-10 00:49:52.008698 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:49:52.008703 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:49:52.008708 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:49:52.008716 | orchestrator | 2026-04-10 00:49:52.008722 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-10 00:49:52.008727 | orchestrator | 2026-04-10 00:49:52.008731 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-10 00:49:52.008736 | orchestrator | Friday 10 April 2026 00:48:11 +0000 (0:00:00.293) 0:00:41.981 ********** 2026-04-10 00:49:52.008779 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:52.008793 | orchestrator | 2026-04-10 00:49:52.008798 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-10 00:49:52.008803 | orchestrator | Friday 10 April 2026 00:48:12 +0000 (0:00:00.516) 0:00:42.498 ********** 2026-04-10 00:49:52.008808 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:49:52.008813 | orchestrator | 2026-04-10 00:49:52.008818 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-10 00:49:52.008823 | orchestrator | Friday 10 April 2026 00:48:12 +0000 (0:00:00.202) 0:00:42.700 ********** 2026-04-10 00:49:52.008828 | orchestrator 
| changed: [testbed-node-0] 2026-04-10 00:49:52.008833 | orchestrator | 2026-04-10 00:49:52.008838 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-10 00:49:52.008843 | orchestrator | Friday 10 April 2026 00:48:14 +0000 (0:00:01.846) 0:00:44.546 ********** 2026-04-10 00:49:52.008847 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:49:52.008852 | orchestrator | 2026-04-10 00:49:52.008857 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-10 00:49:52.008862 | orchestrator | 2026-04-10 00:49:52.008868 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-10 00:49:52.008872 | orchestrator | Friday 10 April 2026 00:49:09 +0000 (0:00:54.693) 0:01:39.240 ********** 2026-04-10 00:49:52.008877 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:49:52.008882 | orchestrator | 2026-04-10 00:49:52.008887 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-10 00:49:52.008892 | orchestrator | Friday 10 April 2026 00:49:09 +0000 (0:00:00.861) 0:01:40.102 ********** 2026-04-10 00:49:52.008897 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:49:52.008902 | orchestrator | 2026-04-10 00:49:52.008907 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-10 00:49:52.008912 | orchestrator | Friday 10 April 2026 00:49:10 +0000 (0:00:00.376) 0:01:40.479 ********** 2026-04-10 00:49:52.008917 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:49:52.008922 | orchestrator | 2026-04-10 00:49:52.008927 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-10 00:49:52.008932 | orchestrator | Friday 10 April 2026 00:49:11 +0000 (0:00:01.611) 0:01:42.090 ********** 2026-04-10 00:49:52.008936 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:49:52.008941 
| orchestrator | 2026-04-10 00:49:52.008946 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-10 00:49:52.008951 | orchestrator | 2026-04-10 00:49:52.008956 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-10 00:49:52.008961 | orchestrator | Friday 10 April 2026 00:49:28 +0000 (0:00:16.215) 0:01:58.305 ********** 2026-04-10 00:49:52.008966 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:49:52.008972 | orchestrator | 2026-04-10 00:49:52.008977 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-10 00:49:52.008981 | orchestrator | Friday 10 April 2026 00:49:28 +0000 (0:00:00.642) 0:01:58.948 ********** 2026-04-10 00:49:52.008987 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:49:52.008993 | orchestrator | 2026-04-10 00:49:52.009002 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-10 00:49:52.009015 | orchestrator | Friday 10 April 2026 00:49:28 +0000 (0:00:00.206) 0:01:59.154 ********** 2026-04-10 00:49:52.009024 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:49:52.009032 | orchestrator | 2026-04-10 00:49:52.009039 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-10 00:49:52.009053 | orchestrator | Friday 10 April 2026 00:49:30 +0000 (0:00:01.697) 0:02:00.852 ********** 2026-04-10 00:49:52.009071 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:49:52.009078 | orchestrator | 2026-04-10 00:49:52.009086 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-10 00:49:52.009094 | orchestrator | 2026-04-10 00:49:52.009102 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-10 00:49:52.009110 | orchestrator | Friday 10 April 2026 00:49:46 +0000 (0:00:16.022) 
0:02:16.875 ********** 2026-04-10 00:49:52.009119 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:49:52.009127 | orchestrator | 2026-04-10 00:49:52.009136 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-10 00:49:52.009144 | orchestrator | Friday 10 April 2026 00:49:47 +0000 (0:00:00.816) 0:02:17.691 ********** 2026-04-10 00:49:52.009151 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:49:52.009156 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:49:52.009161 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:49:52.009166 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-10 00:49:52.009171 | orchestrator | enable_outward_rabbitmq_True 2026-04-10 00:49:52.009176 | orchestrator | 2026-04-10 00:49:52.009180 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-04-10 00:49:52.009185 | orchestrator | skipping: no hosts matched 2026-04-10 00:49:52.009191 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-10 00:49:52.009196 | orchestrator | outward_rabbitmq_restart 2026-04-10 00:49:52.009201 | orchestrator | 2026-04-10 00:49:52.009206 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-04-10 00:49:52.009211 | orchestrator | skipping: no hosts matched 2026-04-10 00:49:52.009216 | orchestrator | 2026-04-10 00:49:52.009221 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-04-10 00:49:52.009226 | orchestrator | skipping: no hosts matched 2026-04-10 00:49:52.009231 | orchestrator | 2026-04-10 00:49:52.009236 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:49:52.009241 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-10 
00:49:52.009247 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-10 00:49:52.009252 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:49:52.009257 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:49:52.009262 | orchestrator | 2026-04-10 00:49:52.009267 | orchestrator | 2026-04-10 00:49:52.009272 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:49:52.009277 | orchestrator | Friday 10 April 2026 00:49:49 +0000 (0:00:02.273) 0:02:19.965 ********** 2026-04-10 00:49:52.009282 | orchestrator | =============================================================================== 2026-04-10 00:49:52.009287 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.93s 2026-04-10 00:49:52.009292 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.76s 2026-04-10 00:49:52.009297 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.16s 2026-04-10 00:49:52.009302 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.17s 2026-04-10 00:49:52.009307 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.27s 2026-04-10 00:49:52.009311 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.21s 2026-04-10 00:49:52.009316 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.17s 2026-04-10 00:49:52.009321 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.02s 2026-04-10 00:49:52.009339 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.90s 2026-04-10 00:49:52.009350 | 
orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.79s 2026-04-10 00:49:52.009358 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.78s 2026-04-10 00:49:52.009366 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.73s 2026-04-10 00:49:52.009373 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.64s 2026-04-10 00:49:52.009381 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.59s 2026-04-10 00:49:52.009390 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.57s 2026-04-10 00:49:52.009398 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.55s 2026-04-10 00:49:52.009406 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.49s 2026-04-10 00:49:52.009414 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.29s 2026-04-10 00:49:52.009421 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.27s 2026-04-10 00:49:52.009426 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.00s 2026-04-10 00:49:52.009431 | orchestrator | 2026-04-10 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:55.040922 | orchestrator | 2026-04-10 00:49:55 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:55.042767 | orchestrator | 2026-04-10 00:49:55 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:55.046164 | orchestrator | 2026-04-10 00:49:55 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:55.046230 | orchestrator | 2026-04-10 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:49:58.093820 | 
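The restart plays above apply the same per-node sequence to testbed-node-0, -1 and -2 in turn: check the container, optionally drain the node, restart, wait for startup, then enable stable feature flags once at the end. A hedged sketch of that sequence as manual steps (the RabbitMQ CLI commands are standard; the `docker exec rabbitmq` wrapper is an assumption about how kolla invokes them, so the privileged commands are shown as comments rather than executed):

```shell
# Rolling restart of one cluster member, mirroring the per-node plays above.
NODE="testbed-node-1"
# 1) Maintenance mode was skipped in this run ("skipping: [testbed-node-1]");
#    when enabled, it drains the node before the restart:
#      docker exec rabbitmq rabbitmq-upgrade drain
# 2) Restart the container, then block until the broker answers again
#    (the "Waiting for rabbitmq to start" task above):
#      docker exec rabbitmq rabbitmqctl await_startup
# 3) After all nodes are back, enable every stable feature flag cluster-wide
#    (the "Enable all stable feature flags" task above):
#      docker exec rabbitmq rabbitmqctl enable_feature_flag all
echo "restart sequence sketch for ${NODE}"
```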
orchestrator | 2026-04-10 00:49:58 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:49:58.096649 | orchestrator | 2026-04-10 00:49:58 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state STARTED 2026-04-10 00:49:58.098549 | orchestrator | 2026-04-10 00:49:58 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED 2026-04-10 00:49:58.099325 | orchestrator | 2026-04-10 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:50:55.871067 | orchestrator | 2026-04-10 00:50:55 | INFO  | Task 
7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:50:55.872015 | orchestrator | 2026-04-10 00:50:55 | INFO  | Task 4e89dc27-0bba-426a-8384-cf3900b2ca72 is in state SUCCESS 2026-04-10 00:50:55.874079 | orchestrator | 2026-04-10 00:50:55.874122 | orchestrator | 2026-04-10 00:50:55.874130 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:50:55.874137 | orchestrator | 2026-04-10 00:50:55.874142 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:50:55.874148 | orchestrator | Friday 10 April 2026 00:48:17 +0000 (0:00:00.247) 0:00:00.247 ********** 2026-04-10 00:50:55.874154 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.874160 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.874166 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.874171 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:50:55.874177 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:50:55.874182 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:50:55.874187 | orchestrator | 2026-04-10 00:50:55.874192 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:50:55.874198 | orchestrator | Friday 10 April 2026 00:48:18 +0000 (0:00:00.605) 0:00:00.853 ********** 2026-04-10 00:50:55.874203 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-10 00:50:55.874209 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-10 00:50:55.874215 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-10 00:50:55.874220 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-10 00:50:55.874225 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-10 00:50:55.874230 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-10 00:50:55.874235 | orchestrator | 2026-04-10 00:50:55.874240 | orchestrator 
| PLAY [Apply role ovn-controller] *********************************************** 2026-04-10 00:50:55.874246 | orchestrator | 2026-04-10 00:50:55.874251 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-10 00:50:55.874256 | orchestrator | Friday 10 April 2026 00:48:19 +0000 (0:00:00.865) 0:00:01.718 ********** 2026-04-10 00:50:55.874262 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:50:55.874269 | orchestrator | 2026-04-10 00:50:55.874274 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-10 00:50:55.874279 | orchestrator | Friday 10 April 2026 00:48:20 +0000 (0:00:00.971) 0:00:02.690 ********** 2026-04-10 00:50:55.874287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874352 | orchestrator | 2026-04-10 00:50:55.874370 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-10 00:50:55.874379 | orchestrator | Friday 10 April 2026 00:48:21 +0000 (0:00:01.365) 0:00:04.055 ********** 2026-04-10 00:50:55.874392 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874450 | orchestrator | 2026-04-10 00:50:55.874458 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-10 00:50:55.874465 | orchestrator | Friday 10 April 2026 00:48:23 +0000 (0:00:01.479) 0:00:05.534 ********** 2026-04-10 00:50:55.874478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874501 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874593 | orchestrator | 2026-04-10 00:50:55.874601 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-10 00:50:55.874609 | orchestrator | Friday 10 April 
2026 00:48:24 +0000 (0:00:01.525) 0:00:07.060 ********** 2026-04-10 00:50:55.874624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874666 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874684 | orchestrator | 2026-04-10 00:50:55.874701 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-10 00:50:55.874710 | orchestrator | Friday 10 April 2026 00:48:26 +0000 (0:00:01.370) 0:00:08.430 ********** 2026-04-10 00:50:55.874719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.874776 | orchestrator | 2026-04-10 00:50:55.874785 | orchestrator | TASK [ovn-controller 
: Create br-int bridge on OpenvSwitch] ******************** 2026-04-10 00:50:55.874793 | orchestrator | Friday 10 April 2026 00:48:27 +0000 (0:00:01.499) 0:00:09.930 ********** 2026-04-10 00:50:55.874801 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:50:55.874810 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.874818 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:50:55.874826 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:50:55.874834 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.874841 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:50:55.874849 | orchestrator | 2026-04-10 00:50:55.874857 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-10 00:50:55.874868 | orchestrator | Friday 10 April 2026 00:48:30 +0000 (0:00:02.508) 0:00:12.438 ********** 2026-04-10 00:50:55.874877 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-10 00:50:55.874885 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-10 00:50:55.874893 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-10 00:50:55.874901 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-10 00:50:55.874909 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-10 00:50:55.874917 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-10 00:50:55.874925 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-10 00:50:55.874933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-10 00:50:55.874946 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-10 00:50:55.874954 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-10 00:50:55.874962 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-10 00:50:55.874970 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-10 00:50:55.874979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-10 00:50:55.874995 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-10 00:50:55.875003 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-10 00:50:55.875011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-10 00:50:55.875019 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-10 00:50:55.875028 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-10 00:50:55.875036 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-10 00:50:55.875045 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-10 00:50:55.875053 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-10 00:50:55.875060 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-10 00:50:55.875068 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-10 00:50:55.875076 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-10 00:50:55.875085 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-10 00:50:55.875093 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-10 00:50:55.875101 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-10 00:50:55.875109 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-10 00:50:55.875117 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-10 00:50:55.875125 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-10 00:50:55.875133 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-10 00:50:55.875141 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-10 00:50:55.875149 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-10 00:50:55.875157 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-10 00:50:55.875165 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-10 00:50:55.875174 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-10 00:50:55.875182 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-10 00:50:55.875193 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-10 00:50:55.875202 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-10 00:50:55.875210 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-10 00:50:55.875218 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-10 00:50:55.875231 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-10 00:50:55.875239 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-10 00:50:55.875248 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-10 00:50:55.875261 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-10 00:50:55.875269 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-10 00:50:55.875278 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-10 00:50:55.875286 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-10 00:50:55.875294 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 
'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-10 00:50:55.875302 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-10 00:50:55.875310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-10 00:50:55.875319 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-10 00:50:55.875327 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-10 00:50:55.875334 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-10 00:50:55.875342 | orchestrator | 2026-04-10 00:50:55.875350 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-10 00:50:55.875359 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:20.431) 0:00:32.870 ********** 2026-04-10 00:50:55.875367 | orchestrator | 2026-04-10 00:50:55.875375 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-10 00:50:55.875383 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.061) 0:00:32.931 ********** 2026-04-10 00:50:55.875391 | orchestrator | 2026-04-10 00:50:55.875399 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-10 00:50:55.875407 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.067) 0:00:32.998 ********** 2026-04-10 00:50:55.875415 | orchestrator | 2026-04-10 00:50:55.875423 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-10 00:50:55.875430 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.063) 0:00:33.062 ********** 
2026-04-10 00:50:55.875438 | orchestrator | 2026-04-10 00:50:55.875446 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-10 00:50:55.875455 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.063) 0:00:33.125 ********** 2026-04-10 00:50:55.875463 | orchestrator | 2026-04-10 00:50:55.875471 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-10 00:50:55.875479 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.063) 0:00:33.189 ********** 2026-04-10 00:50:55.875487 | orchestrator | 2026-04-10 00:50:55.875496 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-10 00:50:55.875503 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.064) 0:00:33.253 ********** 2026-04-10 00:50:55.875511 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:50:55.875577 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:50:55.875588 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.875596 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.875613 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.875622 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:50:55.875629 | orchestrator | 2026-04-10 00:50:55.875637 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-10 00:50:55.875645 | orchestrator | Friday 10 April 2026 00:48:52 +0000 (0:00:01.886) 0:00:35.140 ********** 2026-04-10 00:50:55.875654 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.875662 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.875671 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:50:55.875679 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:50:55.875686 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:50:55.875695 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:50:55.875702 | orchestrator | 
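(Editor's aside: the "Configure OVN in OVSDB" task above sets a fixed group of `external_ids` on each hypervisor's Open vSwitch instance. The sketch below, in Python, reconstructs those settings for one chassis using the values visible in the log for testbed-node-0, and renders the equivalent `ovs-vsctl set open_vswitch . external_ids:...` commands. This is an illustration only: the role applies these values through Ansible, not via this script, and the function names here are hypothetical.)

```python
# Sketch: reconstruct the per-chassis OVN settings applied by the
# "Configure OVN in OVSDB" task, using testbed-node-0's values from the log.
# The ovs-vsctl command form is for illustration; the role uses Ansible modules.

def ovn_chassis_settings(encap_ip: str, sb_remotes: list[str]) -> dict:
    """Return the external_ids the task sets on each hypervisor."""
    return {
        "ovn-encap-ip": encap_ip,                 # local Geneve tunnel endpoint
        "ovn-encap-type": "geneve",
        # OVN southbound DB endpoints (port 6642 on the three control nodes)
        "ovn-remote": ",".join(f"tcp:{ip}:6642" for ip in sb_remotes),
        "ovn-remote-probe-interval": "60000",     # milliseconds
        "ovn-openflow-probe-interval": "60",      # seconds
        "ovn-monitor-all": "false",
    }

def render_commands(settings: dict) -> list[str]:
    """Render each setting as an ovs-vsctl invocation (illustrative only)."""
    return [
        f"ovs-vsctl set open_vswitch . external_ids:{key}={value}"
        for key, value in settings.items()
    ]

if __name__ == "__main__":
    sb_nodes = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
    for cmd in render_commands(ovn_chassis_settings("192.168.16.10", sb_nodes)):
        print(cmd)
```

Note how `ovn-remote` lists all three southbound DB members, matching the comma-separated `tcp:...:6642` value each node received in the log; only `ovn-encap-ip` differs per node.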
2026-04-10 00:50:55.875711 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-10 00:50:55.875718 | orchestrator | 2026-04-10 00:50:55.875726 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-10 00:50:55.875738 | orchestrator | Friday 10 April 2026 00:49:26 +0000 (0:00:33.155) 0:01:08.296 ********** 2026-04-10 00:50:55.875747 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:50:55.875755 | orchestrator | 2026-04-10 00:50:55.875763 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-10 00:50:55.875772 | orchestrator | Friday 10 April 2026 00:49:26 +0000 (0:00:00.439) 0:01:08.736 ********** 2026-04-10 00:50:55.875780 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:50:55.875788 | orchestrator | 2026-04-10 00:50:55.875796 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-10 00:50:55.875804 | orchestrator | Friday 10 April 2026 00:49:27 +0000 (0:00:00.578) 0:01:09.314 ********** 2026-04-10 00:50:55.875812 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.875820 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.875828 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.875837 | orchestrator | 2026-04-10 00:50:55.875842 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-10 00:50:55.875847 | orchestrator | Friday 10 April 2026 00:49:27 +0000 (0:00:00.817) 0:01:10.131 ********** 2026-04-10 00:50:55.875852 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.875857 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.875862 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.875871 | orchestrator | 
2026-04-10 00:50:55.875878 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-10 00:50:55.875886 | orchestrator | Friday 10 April 2026 00:49:28 +0000 (0:00:00.281) 0:01:10.413 ********** 2026-04-10 00:50:55.875894 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.875902 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.875910 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.875918 | orchestrator | 2026-04-10 00:50:55.875926 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-10 00:50:55.875945 | orchestrator | Friday 10 April 2026 00:49:28 +0000 (0:00:00.370) 0:01:10.783 ********** 2026-04-10 00:50:55.875961 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.875969 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.875977 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.875985 | orchestrator | 2026-04-10 00:50:55.875993 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-10 00:50:55.876001 | orchestrator | Friday 10 April 2026 00:49:28 +0000 (0:00:00.334) 0:01:11.118 ********** 2026-04-10 00:50:55.876009 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.876017 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.876025 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.876032 | orchestrator | 2026-04-10 00:50:55.876040 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-10 00:50:55.876048 | orchestrator | Friday 10 April 2026 00:49:29 +0000 (0:00:00.272) 0:01:11.391 ********** 2026-04-10 00:50:55.876062 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876071 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876081 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876086 | orchestrator | 2026-04-10 00:50:55.876091 | orchestrator | TASK [ovn-db 
: Check OVN NB service port liveness] ***************************** 2026-04-10 00:50:55.876095 | orchestrator | Friday 10 April 2026 00:49:29 +0000 (0:00:00.268) 0:01:11.659 ********** 2026-04-10 00:50:55.876100 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876105 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876110 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876115 | orchestrator | 2026-04-10 00:50:55.876120 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-10 00:50:55.876125 | orchestrator | Friday 10 April 2026 00:49:29 +0000 (0:00:00.368) 0:01:12.028 ********** 2026-04-10 00:50:55.876129 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876134 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876139 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876144 | orchestrator | 2026-04-10 00:50:55.876149 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-10 00:50:55.876154 | orchestrator | Friday 10 April 2026 00:49:30 +0000 (0:00:00.269) 0:01:12.297 ********** 2026-04-10 00:50:55.876158 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876163 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876168 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876173 | orchestrator | 2026-04-10 00:50:55.876177 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-10 00:50:55.876182 | orchestrator | Friday 10 April 2026 00:49:30 +0000 (0:00:00.279) 0:01:12.577 ********** 2026-04-10 00:50:55.876187 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876192 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876197 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876202 | orchestrator | 2026-04-10 00:50:55.876206 | orchestrator | TASK [ovn-db : 
Fail on existing OVN NB cluster with no leader] ***************** 2026-04-10 00:50:55.876211 | orchestrator | Friday 10 April 2026 00:49:30 +0000 (0:00:00.305) 0:01:12.882 ********** 2026-04-10 00:50:55.876216 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876221 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876225 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876230 | orchestrator | 2026-04-10 00:50:55.876235 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-10 00:50:55.876240 | orchestrator | Friday 10 April 2026 00:49:30 +0000 (0:00:00.291) 0:01:13.174 ********** 2026-04-10 00:50:55.876245 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876250 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876255 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876260 | orchestrator | 2026-04-10 00:50:55.876265 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-10 00:50:55.876270 | orchestrator | Friday 10 April 2026 00:49:31 +0000 (0:00:00.524) 0:01:13.698 ********** 2026-04-10 00:50:55.876275 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876280 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876285 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876290 | orchestrator | 2026-04-10 00:50:55.876295 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-10 00:50:55.876300 | orchestrator | Friday 10 April 2026 00:49:31 +0000 (0:00:00.284) 0:01:13.982 ********** 2026-04-10 00:50:55.876311 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876316 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876321 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876326 | orchestrator | 2026-04-10 00:50:55.876331 | orchestrator | TASK [ovn-db : Get 
OVN SB database information] ******************************** 2026-04-10 00:50:55.876336 | orchestrator | Friday 10 April 2026 00:49:32 +0000 (0:00:00.291) 0:01:14.274 ********** 2026-04-10 00:50:55.876341 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876350 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876355 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876360 | orchestrator | 2026-04-10 00:50:55.876365 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-10 00:50:55.876370 | orchestrator | Friday 10 April 2026 00:49:32 +0000 (0:00:00.314) 0:01:14.589 ********** 2026-04-10 00:50:55.876376 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876380 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876385 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876390 | orchestrator | 2026-04-10 00:50:55.876395 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-10 00:50:55.876400 | orchestrator | Friday 10 April 2026 00:49:32 +0000 (0:00:00.473) 0:01:15.062 ********** 2026-04-10 00:50:55.876405 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876410 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876420 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876426 | orchestrator | 2026-04-10 00:50:55.876430 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-10 00:50:55.876435 | orchestrator | Friday 10 April 2026 00:49:33 +0000 (0:00:00.293) 0:01:15.356 ********** 2026-04-10 00:50:55.876440 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:50:55.876446 | orchestrator | 2026-04-10 00:50:55.876451 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 
2026-04-10 00:50:55.876456 | orchestrator | Friday 10 April 2026 00:49:33 +0000 (0:00:00.613) 0:01:15.969 ********** 2026-04-10 00:50:55.876461 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.876465 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.876470 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.876475 | orchestrator | 2026-04-10 00:50:55.876480 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-10 00:50:55.876485 | orchestrator | Friday 10 April 2026 00:49:34 +0000 (0:00:00.712) 0:01:16.681 ********** 2026-04-10 00:50:55.876490 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.876494 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.876499 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.876504 | orchestrator | 2026-04-10 00:50:55.876509 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-04-10 00:50:55.876514 | orchestrator | Friday 10 April 2026 00:49:34 +0000 (0:00:00.418) 0:01:17.100 ********** 2026-04-10 00:50:55.876538 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876544 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876549 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876554 | orchestrator | 2026-04-10 00:50:55.876558 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-10 00:50:55.876563 | orchestrator | Friday 10 April 2026 00:49:35 +0000 (0:00:00.289) 0:01:17.389 ********** 2026-04-10 00:50:55.876568 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876573 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876578 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876583 | orchestrator | 2026-04-10 00:50:55.876588 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-10 00:50:55.876593 | 
orchestrator | Friday 10 April 2026 00:49:35 +0000 (0:00:00.293) 0:01:17.682 ********** 2026-04-10 00:50:55.876598 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876603 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876608 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876613 | orchestrator | 2026-04-10 00:50:55.876617 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-10 00:50:55.876622 | orchestrator | Friday 10 April 2026 00:49:35 +0000 (0:00:00.422) 0:01:18.104 ********** 2026-04-10 00:50:55.876627 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876632 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876642 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876647 | orchestrator | 2026-04-10 00:50:55.876652 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-04-10 00:50:55.876657 | orchestrator | Friday 10 April 2026 00:49:36 +0000 (0:00:00.324) 0:01:18.429 ********** 2026-04-10 00:50:55.876662 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876667 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876671 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876676 | orchestrator | 2026-04-10 00:50:55.876681 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-10 00:50:55.876686 | orchestrator | Friday 10 April 2026 00:49:36 +0000 (0:00:00.342) 0:01:18.772 ********** 2026-04-10 00:50:55.876691 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.876696 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.876700 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.876705 | orchestrator | 2026-04-10 00:50:55.876710 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-10 
00:50:55.876715 | orchestrator | Friday 10 April 2026 00:49:36 +0000 (0:00:00.256) 0:01:19.028 ********** 2026-04-10 00:50:55.876721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876784 | orchestrator | 
2026-04-10 00:50:55.876789 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-10 00:50:55.876794 | orchestrator | Friday 10 April 2026 00:49:38 +0000 (0:00:01.714) 0:01:20.742 ********** 2026-04-10 00:50:55.876799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876856 | orchestrator | 2026-04-10 00:50:55.876861 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-10 00:50:55.876866 | orchestrator | Friday 10 April 2026 00:49:42 +0000 (0:00:03.918) 0:01:24.661 ********** 2026-04-10 00:50:55.876871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-10 00:50:55.876902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.876937 | orchestrator | 2026-04-10 00:50:55.876942 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-10 00:50:55.876947 | orchestrator | Friday 10 April 2026 00:49:44 +0000 (0:00:02.467) 0:01:27.129 ********** 2026-04-10 00:50:55.876952 | orchestrator | 2026-04-10 00:50:55.876957 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-10 00:50:55.876962 | orchestrator | Friday 10 April 2026 00:49:45 +0000 (0:00:00.171) 0:01:27.300 ********** 2026-04-10 00:50:55.876967 | orchestrator | 2026-04-10 00:50:55.876972 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-10 00:50:55.876976 | orchestrator | Friday 10 April 2026 00:49:45 +0000 (0:00:00.152) 0:01:27.453 ********** 2026-04-10 00:50:55.876982 | orchestrator | 2026-04-10 00:50:55.876986 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-10 00:50:55.876991 | orchestrator | Friday 10 April 2026 00:49:45 +0000 (0:00:00.069) 0:01:27.522 ********** 2026-04-10 00:50:55.876996 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.877001 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:50:55.877006 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.877010 | orchestrator | 2026-04-10 00:50:55.877015 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-10 00:50:55.877020 | orchestrator | Friday 10 April 2026 00:50:05 +0000 (0:00:20.707) 0:01:48.229 ********** 2026-04-10 00:50:55.877025 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.877030 | 
orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.877035 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:50:55.877040 | orchestrator | 2026-04-10 00:50:55.877045 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-10 00:50:55.877050 | orchestrator | Friday 10 April 2026 00:50:09 +0000 (0:00:03.240) 0:01:51.470 ********** 2026-04-10 00:50:55.877055 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.877060 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:50:55.877065 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.877070 | orchestrator | 2026-04-10 00:50:55.877075 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-10 00:50:55.877080 | orchestrator | Friday 10 April 2026 00:50:16 +0000 (0:00:07.599) 0:01:59.069 ********** 2026-04-10 00:50:55.877084 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.877089 | orchestrator | 2026-04-10 00:50:55.877094 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-10 00:50:55.877099 | orchestrator | Friday 10 April 2026 00:50:16 +0000 (0:00:00.179) 0:01:59.248 ********** 2026-04-10 00:50:55.877104 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877109 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877114 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877118 | orchestrator | 2026-04-10 00:50:55.877123 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-10 00:50:55.877129 | orchestrator | Friday 10 April 2026 00:50:17 +0000 (0:00:00.869) 0:02:00.117 ********** 2026-04-10 00:50:55.877133 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.877139 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.877144 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.877153 | orchestrator | 2026-04-10 
00:50:55.877158 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-10 00:50:55.877162 | orchestrator | Friday 10 April 2026 00:50:18 +0000 (0:00:00.789) 0:02:00.907 ********** 2026-04-10 00:50:55.877167 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877172 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877177 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877182 | orchestrator | 2026-04-10 00:50:55.877205 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-10 00:50:55.877210 | orchestrator | Friday 10 April 2026 00:50:19 +0000 (0:00:00.726) 0:02:01.634 ********** 2026-04-10 00:50:55.877216 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.877221 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.877226 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.877231 | orchestrator | 2026-04-10 00:50:55.877236 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-10 00:50:55.877241 | orchestrator | Friday 10 April 2026 00:50:20 +0000 (0:00:00.670) 0:02:02.304 ********** 2026-04-10 00:50:55.877246 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877251 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877260 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877265 | orchestrator | 2026-04-10 00:50:55.877271 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-10 00:50:55.877275 | orchestrator | Friday 10 April 2026 00:50:20 +0000 (0:00:00.751) 0:02:03.055 ********** 2026-04-10 00:50:55.877281 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877286 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877291 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877295 | orchestrator | 2026-04-10 00:50:55.877300 | orchestrator | TASK [ovn-db : Unset 
bootstrap args fact] ************************************** 2026-04-10 00:50:55.877305 | orchestrator | Friday 10 April 2026 00:50:21 +0000 (0:00:00.821) 0:02:03.876 ********** 2026-04-10 00:50:55.877310 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877315 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877320 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877324 | orchestrator | 2026-04-10 00:50:55.877329 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-10 00:50:55.877335 | orchestrator | Friday 10 April 2026 00:50:22 +0000 (0:00:00.394) 0:02:04.271 ********** 2026-04-10 00:50:55.877340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877345 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877350 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877355 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877360 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877371 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877380 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877385 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-10 00:50:55.877395 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877401 | orchestrator | 2026-04-10 00:50:55.877406 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-10 00:50:55.877411 | orchestrator | Friday 10 April 2026 00:50:23 +0000 (0:00:01.431) 0:02:05.703 ********** 2026-04-10 00:50:55.877416 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877421 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877426 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877451 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877471 | orchestrator | 2026-04-10 00:50:55.877476 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-10 00:50:55.877481 | orchestrator | Friday 10 April 2026 00:50:27 +0000 (0:00:03.875) 0:02:09.579 ********** 2026-04-10 00:50:55.877490 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877496 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877502 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877512 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877559 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 00:50:55.877580 | orchestrator | 2026-04-10 00:50:55.877589 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-10 00:50:55.877595 | orchestrator | Friday 10 April 2026 00:50:30 +0000 (0:00:03.049) 0:02:12.628 ********** 2026-04-10 00:50:55.877600 | orchestrator | 2026-04-10 00:50:55.877605 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-10 00:50:55.877610 | orchestrator | Friday 10 April 2026 00:50:30 +0000 (0:00:00.067) 0:02:12.696 ********** 2026-04-10 00:50:55.877615 | orchestrator | 2026-04-10 00:50:55.877620 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-10 00:50:55.877625 | orchestrator | Friday 10 April 2026 00:50:30 +0000 (0:00:00.204) 0:02:12.900 ********** 2026-04-10 00:50:55.877629 | orchestrator | 2026-04-10 00:50:55.877634 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-10 00:50:55.877639 | orchestrator | Friday 10 April 2026 00:50:30 +0000 (0:00:00.060) 0:02:12.961 ********** 2026-04-10 00:50:55.877645 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.877650 | orchestrator | changed: [testbed-node-1] 2026-04-10 
00:50:55.877655 | orchestrator | 2026-04-10 00:50:55.877665 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-10 00:50:55.877670 | orchestrator | Friday 10 April 2026 00:50:36 +0000 (0:00:06.119) 0:02:19.080 ********** 2026-04-10 00:50:55.877675 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:50:55.877680 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.877685 | orchestrator | 2026-04-10 00:50:55.877690 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-10 00:50:55.877695 | orchestrator | Friday 10 April 2026 00:50:42 +0000 (0:00:06.162) 0:02:25.243 ********** 2026-04-10 00:50:55.877700 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:50:55.877705 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:50:55.877710 | orchestrator | 2026-04-10 00:50:55.877714 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-10 00:50:55.877720 | orchestrator | Friday 10 April 2026 00:50:49 +0000 (0:00:06.152) 0:02:31.396 ********** 2026-04-10 00:50:55.877724 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:50:55.877734 | orchestrator | 2026-04-10 00:50:55.877739 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-10 00:50:55.877744 | orchestrator | Friday 10 April 2026 00:50:49 +0000 (0:00:00.109) 0:02:31.506 ********** 2026-04-10 00:50:55.877750 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877755 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877760 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877765 | orchestrator | 2026-04-10 00:50:55.877769 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-10 00:50:55.877774 | orchestrator | Friday 10 April 2026 00:50:50 +0000 (0:00:00.775) 0:02:32.282 ********** 2026-04-10 00:50:55.877779 | 
orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.877784 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.877789 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.877794 | orchestrator | 2026-04-10 00:50:55.877799 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-10 00:50:55.877803 | orchestrator | Friday 10 April 2026 00:50:50 +0000 (0:00:00.590) 0:02:32.872 ********** 2026-04-10 00:50:55.877808 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877813 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877818 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877823 | orchestrator | 2026-04-10 00:50:55.877828 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-10 00:50:55.877833 | orchestrator | Friday 10 April 2026 00:50:51 +0000 (0:00:00.770) 0:02:33.643 ********** 2026-04-10 00:50:55.877838 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:50:55.877843 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:50:55.877847 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:50:55.877852 | orchestrator | 2026-04-10 00:50:55.877858 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-10 00:50:55.877862 | orchestrator | Friday 10 April 2026 00:50:51 +0000 (0:00:00.577) 0:02:34.221 ********** 2026-04-10 00:50:55.877867 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:50:55.877872 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877877 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877882 | orchestrator | 2026-04-10 00:50:55.877887 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-10 00:50:55.877892 | orchestrator | Friday 10 April 2026 00:50:52 +0000 (0:00:00.678) 0:02:34.899 ********** 2026-04-10 00:50:55.877897 | orchestrator | ok: [testbed-node-0] 
2026-04-10 00:50:55.877902 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:50:55.877907 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:50:55.877912 | orchestrator | 2026-04-10 00:50:55.877917 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:50:55.877922 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-10 00:50:55.877927 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-10 00:50:55.877932 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-10 00:50:55.877937 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:50:55.877942 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:50:55.877950 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:50:55.877955 | orchestrator | 2026-04-10 00:50:55.877960 | orchestrator | 2026-04-10 00:50:55.877965 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:50:55.877974 | orchestrator | Friday 10 April 2026 00:50:53 +0000 (0:00:01.110) 0:02:36.010 ********** 2026-04-10 00:50:55.877979 | orchestrator | =============================================================================== 2026-04-10 00:50:55.877983 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.16s 2026-04-10 00:50:55.877988 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 26.83s 2026-04-10 00:50:55.877993 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.43s 2026-04-10 00:50:55.877999 | orchestrator | ovn-db : Restart ovn-northd 
container ---------------------------------- 13.75s 2026-04-10 00:50:55.878004 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.40s 2026-04-10 00:50:55.878009 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.92s 2026-04-10 00:50:55.878044 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s 2026-04-10 00:50:55.878054 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.05s 2026-04-10 00:50:55.878060 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.51s 2026-04-10 00:50:55.878065 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.47s 2026-04-10 00:50:55.878070 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.89s 2026-04-10 00:50:55.878075 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.71s 2026-04-10 00:50:55.878080 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.53s 2026-04-10 00:50:55.878085 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.50s 2026-04-10 00:50:55.878090 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.48s 2026-04-10 00:50:55.878095 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2026-04-10 00:50:55.878100 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.37s 2026-04-10 00:50:55.878105 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.37s 2026-04-10 00:50:55.878110 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.11s 2026-04-10 00:50:55.878115 | orchestrator | ovn-controller : include_tasks 
------------------------------------------ 0.97s
2026-04-10 00:50:55.878120 | orchestrator | 2026-04-10 00:50:55 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED
2026-04-10 00:50:55.878125 | orchestrator | 2026-04-10 00:50:55 | INFO  | Wait 1 second(s) until the next check
[identical status checks of tasks 7674a3b5-3522-4886-9fdf-f5455829d4d1 and 44e865e8-6124-4ced-ba79-e2754b71a1ff repeated every ~3 seconds from 00:50:58 to 00:52:57; both tasks remained in state STARTED]
2026-04-10 00:53:00.644543 | orchestrator | 2026-04-10 00:53:00 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:53:00.645362 | orchestrator | 2026-04-10 00:53:00 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED
2026-04-10 
00:53:00.645407 | orchestrator | 2026-04-10 00:53:00 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:53:03.680851 | orchestrator | 2026-04-10 00:53:03 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:53:03.681989 | orchestrator | 2026-04-10 00:53:03 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state STARTED
2026-04-10 00:53:03.682118 | orchestrator | 2026-04-10 00:53:03 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:53:06.723950 | orchestrator | 2026-04-10 00:53:06 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED
2026-04-10 00:53:06.730333 | orchestrator | 2026-04-10 00:53:06 | INFO  | Task 44e865e8-6124-4ced-ba79-e2754b71a1ff is in state SUCCESS
2026-04-10 00:53:06.731868 | orchestrator |
2026-04-10 00:53:06.732013 | orchestrator |
2026-04-10 00:53:06.732108 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-10 00:53:06.732133 | orchestrator |
2026-04-10 00:53:06.732145 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-10 00:53:06.732157 | orchestrator | Friday 10 April 2026 00:47:09 +0000 (0:00:00.305) 0:00:00.305 **********
2026-04-10 00:53:06.732168 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:53:06.732180 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:53:06.732191 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:53:06.732202 | orchestrator |
2026-04-10 00:53:06.732221 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-10 00:53:06.732239 | orchestrator | Friday 10 April 2026 00:47:09 +0000 (0:00:00.301) 0:00:00.607 **********
2026-04-10 00:53:06.732257 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-10 00:53:06.732273 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-10 00:53:06.732290 | orchestrator | ok: 
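The STARTED/SUCCESS messages above come from a fixed-interval polling loop: report every task's state, sleep, repeat until all tasks are terminal. A minimal sketch of that pattern; the `get_state` callback and function name are illustrative assumptions, not the OSISM client's actual API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600):
    """Poll each task until every one reaches a terminal state.

    Mirrors the log output above: print the state of every pending
    task, then wait a fixed interval before the next check.
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Drop tasks that have finished; sleep only if work remains.
        pending = {t for t in pending if states[t] not in terminal}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

In the log, the two Kolla tasks are polled together and the loop keeps running until the last one (44e865e8…) reports SUCCESS at 00:53:06.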
[testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-10 00:53:06.732346 | orchestrator |
2026-04-10 00:53:06.732365 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-10 00:53:06.732383 | orchestrator |
2026-04-10 00:53:06.732402 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-10 00:53:06.732450 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:00.380) 0:00:00.988 **********
2026-04-10 00:53:06.732467 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:53:06.732485 | orchestrator |
2026-04-10 00:53:06.732506 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-10 00:53:06.732526 | orchestrator | Friday 10 April 2026 00:47:10 +0000 (0:00:01.015) 0:00:01.750 **********
2026-04-10 00:53:06.732546 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:53:06.732562 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:53:06.732574 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:53:06.732585 | orchestrator |
2026-04-10 00:53:06.732692 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-10 00:53:06.732704 | orchestrator | Friday 10 April 2026 00:47:11 +0000 (0:00:01.015) 0:00:02.766 **********
2026-04-10 00:53:06.732715 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:53:06.732726 | orchestrator |
2026-04-10 00:53:06.732737 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-10 00:53:06.732748 | orchestrator | Friday 10 April 2026 00:47:12 +0000 (0:00:00.666) 0:00:03.432 **********
2026-04-10 00:53:06.732759 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:53:06.732770 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:53:06.732781 | orchestrator | ok: 
[testbed-node-2]
2026-04-10 00:53:06.732792 | orchestrator |
2026-04-10 00:53:06.732803 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-10 00:53:06.732814 | orchestrator | Friday 10 April 2026 00:47:13 +0000 (0:00:00.879) 0:00:04.311 **********
2026-04-10 00:53:06.732824 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-10 00:53:06.732835 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-10 00:53:06.732846 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-10 00:53:06.732858 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-10 00:53:06.732869 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-10 00:53:06.732880 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-10 00:53:06.732915 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-10 00:53:06.732926 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-10 00:53:06.732943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-10 00:53:06.732960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-10 00:53:06.732979 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-10 00:53:06.733007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-10 00:53:06.733024 | orchestrator |
2026-04-10 00:53:06.733041 | orchestrator | TASK [module-load : Load modules] 
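The `sysctl : Setting sysctl values` results above show the sentinel value `KOLLA_UNSET` being skipped (reported `ok:` rather than `changed:`) while real values are written. A rough sketch of that behavior; the function name and the direct `/proc/sys` writes are illustrative assumptions, not the role's actual implementation (which uses Ansible's sysctl machinery):

```python
from pathlib import Path

def apply_sysctl(values, proc_sys="/proc/sys"):
    """Apply a list of {'name': ..., 'value': ...} sysctl items.

    Items whose value is the sentinel 'KOLLA_UNSET' are skipped,
    leaving the kernel default untouched. The directories under
    proc_sys are expected to exist, as they do under /proc/sys.
    """
    applied = {}
    for item in values:
        if item["value"] == "KOLLA_UNSET":
            continue  # leave the kernel default alone
        # net.ipv4.ip_nonlocal_bind -> <proc_sys>/net/ipv4/ip_nonlocal_bind
        path = Path(proc_sys) / item["name"].replace(".", "/")
        path.write_text(f"{item['value']}\n")
        applied[item["name"]] = item["value"]
    return applied
```

`net.ipv4.ip_nonlocal_bind = 1` is what lets haproxy/keepalived bind the VIP on nodes that do not currently hold it.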
**********************************************
2026-04-10 00:53:06.733059 | orchestrator | Friday 10 April 2026 00:47:18 +0000 (0:00:05.333) 0:00:09.645 **********
2026-04-10 00:53:06.733078 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-10 00:53:06.733096 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-10 00:53:06.733114 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-10 00:53:06.733133 | orchestrator |
2026-04-10 00:53:06.733152 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-10 00:53:06.733170 | orchestrator | Friday 10 April 2026 00:47:19 +0000 (0:00:00.586) 0:00:10.232 **********
2026-04-10 00:53:06.733190 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-10 00:53:06.733208 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-10 00:53:06.733227 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-10 00:53:06.733246 | orchestrator |
2026-04-10 00:53:06.733264 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-10 00:53:06.733391 | orchestrator | Friday 10 April 2026 00:47:20 +0000 (0:00:01.533) 0:00:11.765 **********
2026-04-10 00:53:06.733459 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-10 00:53:06.733486 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:53:06.733531 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-10 00:53:06.733550 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:53:06.733599 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-10 00:53:06.733615 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:53:06.733626 | orchestrator |
2026-04-10 00:53:06.733637 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-10 00:53:06.733648 | orchestrator | Friday 10 April 2026 00:47:21 +0000 (0:00:00.585) 
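The `Persist modules via modules-load.d` task above makes the `ip_vs` module survive reboots by dropping a config file that `systemd-modules-load` reads at boot; loading the module immediately is the separate `Load modules` task. A minimal sketch of the persistence step (hypothetical helper, not the role's actual code):

```python
from pathlib import Path

def persist_module(name, modules_load_dir="/etc/modules-load.d"):
    """Write <modules_load_dir>/<name>.conf containing the module name.

    systemd-modules-load(8) reads every *.conf file in modules-load.d
    at boot and modprobes each listed module, so this is all the
    persistence step needs to do.
    """
    conf = Path(modules_load_dir) / f"{name}.conf"
    conf.parent.mkdir(parents=True, exist_ok=True)
    conf.write_text(f"{name}\n")
    return conf
```

The complementary `Drop module persistence` task (skipped above) would delete that file when the module is no longer wanted.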
0:00:12.351 ********** 2026-04-10 00:53:06.733663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.733687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.733724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2026-04-10 00:53:06.733751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.733773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.733827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 
6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.733849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.733868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.733888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.733920 | orchestrator | 2026-04-10 00:53:06.734243 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-10 00:53:06.734283 | orchestrator | Friday 10 April 2026 00:47:23 +0000 (0:00:01.713) 0:00:14.064 ********** 2026-04-10 00:53:06.734302 | orchestrator | changed: 
[testbed-node-0]
2026-04-10 00:53:06.734320 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:53:06.734340 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:53:06.734358 | orchestrator |
2026-04-10 00:53:06.734377 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-10 00:53:06.734396 | orchestrator | Friday 10 April 2026 00:47:24 +0000 (0:00:00.857) 0:00:14.922 **********
2026-04-10 00:53:06.734436 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-10 00:53:06.734450 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-10 00:53:06.734461 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-10 00:53:06.734472 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-10 00:53:06.734483 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-10 00:53:06.734493 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-10 00:53:06.734504 | orchestrator |
2026-04-10 00:53:06.734516 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-10 00:53:06.734527 | orchestrator | Friday 10 April 2026 00:47:25 +0000 (0:00:01.666) 0:00:16.588 **********
2026-04-10 00:53:06.734538 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:53:06.734549 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:53:06.734559 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:53:06.734571 | orchestrator |
2026-04-10 00:53:06.734589 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-10 00:53:06.734605 | orchestrator | Friday 10 April 2026 00:47:27 +0000 (0:00:01.544) 0:00:18.254 **********
2026-04-10 00:53:06.734734 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:53:06.734761 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:53:06.734780 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:53:06.734799 | orchestrator |
2026-04-10 
00:53:06.734810 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-10 00:53:06.734821 | orchestrator | Friday 10 April 2026 00:47:29 +0000 (0:00:01.544) 0:00:19.799 ********** 2026-04-10 00:53:06.734834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.734971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.735002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.735039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.735061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-10 00:53:06.735082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.735110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.735133 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.735151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.735276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e', 
'__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-10 00:53:06.735377 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.735402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.735463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.735485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-10 00:53:06.735504 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.735570 | orchestrator | 2026-04-10 00:53:06.735591 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-10 00:53:06.735611 | orchestrator | Friday 10 April 2026 00:47:29 +0000 (0:00:00.860) 0:00:20.659 ********** 2026-04-10 00:53:06.735632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.735652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-10 
00:53:06.735761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.735808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.735829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.735847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 
'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-10 00:53:06.735867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.735959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.735972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-10 00:53:06.736016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.736029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.736041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e', '__omit_place_holder__8986c659d683b051ef7c37aca5ac8c169690867e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-10 00:53:06.736052 | orchestrator | 2026-04-10 00:53:06.736063 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-10 00:53:06.736074 | orchestrator | Friday 10 April 2026 00:47:33 +0000 (0:00:03.794) 0:00:24.453 ********** 2026-04-10 00:53:06.736086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.736098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.736109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.736141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.736154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.736165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.736177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.736188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.736200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.736211 | orchestrator | 2026-04-10 00:53:06.736297 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-10 00:53:06.736309 | orchestrator | Friday 10 April 2026 00:47:37 +0000 (0:00:03.700) 0:00:28.153 ********** 2026-04-10 00:53:06.736321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-10 00:53:06.736339 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-10 00:53:06.736350 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-10 00:53:06.736361 | orchestrator | 2026-04-10 00:53:06.736372 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-10 00:53:06.736382 | orchestrator | Friday 10 April 2026 00:47:39 +0000 (0:00:02.046) 0:00:30.199 ********** 2026-04-10 00:53:06.736391 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-10 00:53:06.736493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-10 00:53:06.736509 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-10 00:53:06.736519 | orchestrator | 2026-04-10 00:53:06.736538 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 
2026-04-10 00:53:06.736559 | orchestrator | Friday 10 April 2026 00:47:45 +0000 (0:00:05.947) 0:00:36.147 ********** 2026-04-10 00:53:06.736624 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.736643 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.736660 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.736676 | orchestrator | 2026-04-10 00:53:06.736693 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-10 00:53:06.736710 | orchestrator | Friday 10 April 2026 00:47:46 +0000 (0:00:00.779) 0:00:36.927 ********** 2026-04-10 00:53:06.736727 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-10 00:53:06.736746 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-10 00:53:06.736762 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-10 00:53:06.736773 | orchestrator | 2026-04-10 00:53:06.736783 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-10 00:53:06.736792 | orchestrator | Friday 10 April 2026 00:47:48 +0000 (0:00:02.129) 0:00:39.057 ********** 2026-04-10 00:53:06.736803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-10 00:53:06.736883 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-10 00:53:06.736895 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-10 00:53:06.736904 | orchestrator | 2026-04-10 00:53:06.736914 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] 
********************************* 2026-04-10 00:53:06.736924 | orchestrator | Friday 10 April 2026 00:47:50 +0000 (0:00:01.891) 0:00:40.948 ********** 2026-04-10 00:53:06.736934 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-10 00:53:06.736944 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-10 00:53:06.736954 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-10 00:53:06.736963 | orchestrator | 2026-04-10 00:53:06.736973 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-10 00:53:06.736983 | orchestrator | Friday 10 April 2026 00:47:51 +0000 (0:00:01.680) 0:00:42.629 ********** 2026-04-10 00:53:06.736992 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-10 00:53:06.737042 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-10 00:53:06.737052 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-10 00:53:06.737062 | orchestrator | 2026-04-10 00:53:06.737074 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-10 00:53:06.737107 | orchestrator | Friday 10 April 2026 00:47:54 +0000 (0:00:02.202) 0:00:44.832 ********** 2026-04-10 00:53:06.737129 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.737145 | orchestrator | 2026-04-10 00:53:06.737160 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-10 00:53:06.737176 | orchestrator | Friday 10 April 2026 00:47:54 +0000 (0:00:00.711) 0:00:45.544 ********** 2026-04-10 00:53:06.737231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.737253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.737289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.737306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.737538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.737559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.737591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.737612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.737628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.737645 | orchestrator | 2026-04-10 00:53:06.737659 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-10 00:53:06.737675 | orchestrator | Friday 10 April 2026 00:47:59 +0000 (0:00:04.272) 0:00:49.817 ********** 2026-04-10 00:53:06.737715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.737733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.737751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.737780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.737798 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.737816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.737834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.737896 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.737924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.737954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.737973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.737990 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.738006 | orchestrator | 2026-04-10 00:53:06.738205 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-10 00:53:06.738236 | orchestrator | Friday 10 April 2026 00:47:59 +0000 (0:00:00.847) 0:00:50.664 ********** 2026-04-10 00:53:06.738248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.738259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.738269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.738279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.738306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.738317 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.738327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.738337 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.738347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.738364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.738375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.738384 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.738394 | orchestrator | 2026-04-10 00:53:06.738404 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-10 00:53:06.738623 | orchestrator | Friday 10 April 2026 00:48:01 +0000 (0:00:01.147) 0:00:51.811 
********** 2026-04-10 00:53:06.738656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.738784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.738844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.738854 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.738863 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.738886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.738895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.738903 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.738911 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.738920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.738938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.738947 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.738955 | orchestrator | 2026-04-10 00:53:06.738963 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS certificate] *** 2026-04-10 00:53:06.738971 | orchestrator | Friday 10 April 2026 00:48:01 +0000 (0:00:00.511) 0:00:52.323 ********** 2026-04-10 00:53:06.738980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.738994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739011 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.739019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-10 00:53:06.739044 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.739062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739095 | 
orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.739103 | orchestrator | 2026-04-10 00:53:06.739111 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-10 00:53:06.739119 | orchestrator | Friday 10 April 2026 00:48:02 +0000 (0:00:00.774) 0:00:53.097 ********** 2026-04-10 00:53:06.739127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739152 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.739171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739203 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.739211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739236 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.739244 | orchestrator | 2026-04-10 00:53:06.739252 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-10 00:53:06.739260 | orchestrator | Friday 10 April 2026 00:48:03 +0000 (0:00:00.969) 0:00:54.066 ********** 2026-04-10 00:53:06.739269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739350 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.739359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739367 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.739375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739482 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.739490 | orchestrator | 2026-04-10 00:53:06.739498 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-10 00:53:06.739506 | orchestrator | Friday 10 April 2026 00:48:03 +0000 (0:00:00.634) 0:00:54.701 ********** 2026-04-10 00:53:06.739514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
 2026-04-10 00:53:06.739531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739539 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.739548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739589 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739624 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.739632 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.739640 | orchestrator | 2026-04-10 00:53:06.739648 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-10 00:53:06.739656 | orchestrator | Friday 10 April 2026 00:48:04 +0000 (0:00:00.530) 0:00:55.231 ********** 2026-04-10 00:53:06.739664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739776 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.739790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739816 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.739824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-10 00:53:06.739852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-10 00:53:06.739862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-10 00:53:06.739870 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.739878 | orchestrator | 2026-04-10 00:53:06.739886 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-10 00:53:06.739894 | orchestrator | Friday 10 April 2026 00:48:05 +0000 (0:00:01.204) 0:00:56.436 ********** 2026-04-10 00:53:06.739902 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-10 00:53:06.739935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-10 00:53:06.739949 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-10 00:53:06.739958 | orchestrator | 2026-04-10 00:53:06.739989 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-10 00:53:06.739998 | orchestrator | Friday 10 April 2026 00:48:07 +0000 (0:00:01.682) 0:00:58.118 ********** 2026-04-10 00:53:06.740006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-10 00:53:06.740014 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-10 00:53:06.740022 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-10 00:53:06.740030 | orchestrator | 2026-04-10 00:53:06.740038 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-10 00:53:06.740046 | orchestrator | Friday 10 April 2026 00:48:09 +0000 (0:00:01.825) 0:00:59.943 ********** 2026-04-10 00:53:06.740053 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-10 00:53:06.740061 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-10 00:53:06.740069 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-10 00:53:06.740077 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.740085 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-10 00:53:06.740093 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-10 00:53:06.740112 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.740121 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-10 00:53:06.740129 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.740136 | orchestrator | 2026-04-10 00:53:06.740144 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-10 00:53:06.740153 | orchestrator | Friday 10 April 2026 00:48:10 +0000 (0:00:00.858) 0:01:00.802 ********** 2026-04-10 00:53:06.740167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.740176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.740184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-10 00:53:06.740203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.740212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.740221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-10 00:53:06.740229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.740243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.740251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-10 00:53:06.740290 | orchestrator | 2026-04-10 00:53:06.740300 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-10 00:53:06.740308 | orchestrator | Friday 10 April 2026 00:48:12 +0000 (0:00:02.413) 0:01:03.216 ********** 2026-04-10 00:53:06.740316 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.740324 | orchestrator | 2026-04-10 00:53:06.740332 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-10 00:53:06.740339 | orchestrator | Friday 10 
April 2026 00:48:12 +0000 (0:00:00.455) 0:01:03.671 ********** 2026-04-10 00:53:06.740352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-10 00:53:06.740378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.740387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-10 00:53:06.740501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.740509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-10 00:53:06.740598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.740607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740624 | orchestrator | 2026-04-10 00:53:06.740632 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-10 00:53:06.740640 | orchestrator | Friday 10 April 2026 00:48:16 +0000 (0:00:03.842) 0:01:07.513 ********** 2026-04-10 00:53:06.740648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-10 00:53:06.740669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-04-10 00:53:06.740678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740700 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.740709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-10 00:53:06.740717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.740726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2026-04-10 00:53:06.740760 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.740774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-10 00:53:06.740787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.740796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.740812 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.740821 | orchestrator | 2026-04-10 00:53:06.740829 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-10 00:53:06.740837 | orchestrator | Friday 10 April 2026 00:48:17 +0000 (0:00:00.711) 0:01:08.225 ********** 2026-04-10 00:53:06.740879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-10 00:53:06.740889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-10 00:53:06.740898 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.740907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-10 00:53:06.740915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-10 00:53:06.740923 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.740931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-10 00:53:06.740939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-10 00:53:06.740953 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.740961 | orchestrator | 2026-04-10 00:53:06.740974 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-10 00:53:06.740982 | orchestrator | Friday 10 April 2026 00:48:18 +0000 (0:00:00.829) 0:01:09.054 ********** 2026-04-10 00:53:06.741010 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.741019 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.741027 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.741035 | orchestrator | 2026-04-10 00:53:06.741043 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-10 00:53:06.741051 | orchestrator | Friday 10 April 2026 00:48:19 +0000 (0:00:01.315) 0:01:10.369 ********** 2026-04-10 00:53:06.741059 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.741067 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.741075 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.741083 | orchestrator | 2026-04-10 00:53:06.741091 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-10 00:53:06.741099 | orchestrator | Friday 10 April 2026 
00:48:21 +0000 (0:00:02.004) 0:01:12.374 ********** 2026-04-10 00:53:06.741107 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.741115 | orchestrator | 2026-04-10 00:53:06.741123 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-10 00:53:06.741130 | orchestrator | Friday 10 April 2026 00:48:22 +0000 (0:00:00.688) 0:01:13.062 ********** 2026-04-10 00:53:06.741159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.741169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.741215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.741241 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741263 | orchestrator | 2026-04-10 00:53:06.741272 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-10 00:53:06.741280 | orchestrator | Friday 10 April 2026 00:48:25 +0000 (0:00:03.491) 0:01:16.554 ********** 2026-04-10 00:53:06.741297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.741306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741322 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.741331 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.741340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741361 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.741378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.741387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741396 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.741404 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.741429 | orchestrator | 2026-04-10 00:53:06.741437 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-10 00:53:06.741445 | orchestrator | Friday 10 April 2026 00:48:26 +0000 (0:00:00.780) 0:01:17.334 ********** 2026-04-10 00:53:06.741453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-10 00:53:06.741462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-10 00:53:06.741471 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.741483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-10 00:53:06.741491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2026-04-10 00:53:06.741499 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.741506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-10 00:53:06.741514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-10 00:53:06.741522 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.741530 | orchestrator | 2026-04-10 00:53:06.741538 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-10 00:53:06.741546 | orchestrator | Friday 10 April 2026 00:48:27 +0000 (0:00:00.849) 0:01:18.184 ********** 2026-04-10 00:53:06.741554 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.741562 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.741570 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.741578 | orchestrator | 2026-04-10 00:53:06.741586 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-10 00:53:06.741593 | orchestrator | Friday 10 April 2026 00:48:28 +0000 (0:00:01.195) 0:01:19.379 ********** 2026-04-10 00:53:06.741605 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.741613 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.741621 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.741629 | orchestrator | 2026-04-10 00:53:06.741653 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-10 00:53:06.741662 | orchestrator | Friday 10 April 2026 00:48:31 +0000 (0:00:02.421) 0:01:21.801 ********** 2026-04-10 00:53:06.741670 | orchestrator | 
skipping: [testbed-node-0] 2026-04-10 00:53:06.741678 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.741685 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.741693 | orchestrator | 2026-04-10 00:53:06.741701 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-10 00:53:06.741709 | orchestrator | Friday 10 April 2026 00:48:31 +0000 (0:00:00.964) 0:01:22.765 ********** 2026-04-10 00:53:06.741717 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.741725 | orchestrator | 2026-04-10 00:53:06.741732 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-10 00:53:06.741740 | orchestrator | Friday 10 April 2026 00:48:33 +0000 (0:00:01.453) 0:01:24.219 ********** 2026-04-10 00:53:06.741749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-10 00:53:06.741758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-10 00:53:06.741772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-10 00:53:06.741781 | orchestrator | 2026-04-10 00:53:06.741789 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-10 00:53:06.741797 | orchestrator | Friday 10 April 2026 00:48:36 +0000 (0:00:02.803) 0:01:27.022 ********** 2026-04-10 00:53:06.741824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-10 00:53:06.741832 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.741841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-10 00:53:06.741849 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.741857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-10 00:53:06.741873 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.741881 | orchestrator | 2026-04-10 00:53:06.741888 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-10 00:53:06.741896 | orchestrator | Friday 10 April 2026 00:48:38 +0000 (0:00:02.217) 0:01:29.239 ********** 2026-04-10 00:53:06.741906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-10 00:53:06.741915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-10 00:53:06.741925 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.741933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-10 00:53:06.741942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-10 00:53:06.741950 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.741966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-10 00:53:06.741975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-10 00:53:06.741983 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.741991 | orchestrator | 2026-04-10 00:53:06.741999 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-04-10 00:53:06.742007 | orchestrator | Friday 10 April 2026 00:48:41 +0000 (0:00:02.765) 0:01:32.004 ********** 2026-04-10 00:53:06.742051 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.742061 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.742076 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.742095 | orchestrator | 2026-04-10 00:53:06.742103 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-10 00:53:06.742111 | orchestrator | Friday 10 April 2026 00:48:41 +0000 (0:00:00.518) 0:01:32.523 ********** 2026-04-10 00:53:06.742119 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.742127 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.742135 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.742143 | orchestrator | 2026-04-10 00:53:06.742150 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-10 00:53:06.742158 | orchestrator | Friday 10 April 2026 00:48:43 +0000 (0:00:01.355) 0:01:33.878 ********** 2026-04-10 00:53:06.742166 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.742247 | orchestrator | 2026-04-10 00:53:06.742255 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-10 00:53:06.742263 | orchestrator | Friday 10 April 2026 00:48:44 +0000 (0:00:00.986) 0:01:34.864 ********** 2026-04-10 00:53:06.742272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.742281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.742338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.742355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742466 | orchestrator | 2026-04-10 00:53:06.742475 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-10 00:53:06.742483 | orchestrator | Friday 10 April 2026 00:48:47 +0000 (0:00:03.791) 0:01:38.656 ********** 2026-04-10 00:53:06.742492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.742505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.742525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.742552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742560 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.742568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742609 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742626 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.742634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.742643 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.742651 | orchestrator | 2026-04-10 00:53:06.742659 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-10 00:53:06.742667 | orchestrator | Friday 10 April 2026 00:48:48 +0000 (0:00:00.611) 0:01:39.267 ********** 2026-04-10 00:53:06.742675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-10 00:53:06.742684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-10 00:53:06.742698 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.742706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-10 00:53:06.742714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-10 00:53:06.742726 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.742738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-10 
00:53:06.742747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-10 00:53:06.742755 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.742763 | orchestrator | 2026-04-10 00:53:06.742771 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-10 00:53:06.742779 | orchestrator | Friday 10 April 2026 00:48:49 +0000 (0:00:00.954) 0:01:40.221 ********** 2026-04-10 00:53:06.742787 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.742795 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.742803 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.742811 | orchestrator | 2026-04-10 00:53:06.742819 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-10 00:53:06.742827 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:01.217) 0:01:41.439 ********** 2026-04-10 00:53:06.742834 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.742842 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.742850 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.742858 | orchestrator | 2026-04-10 00:53:06.742866 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-10 00:53:06.742939 | orchestrator | Friday 10 April 2026 00:48:52 +0000 (0:00:01.801) 0:01:43.241 ********** 2026-04-10 00:53:06.742947 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.742955 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.742962 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.742970 | orchestrator | 2026-04-10 00:53:06.742978 | orchestrator | TASK [include_role : cyborg] *************************************************** 
2026-04-10 00:53:06.742986 | orchestrator | Friday 10 April 2026 00:48:52 +0000 (0:00:00.273) 0:01:43.515 ********** 2026-04-10 00:53:06.742994 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.743002 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.743010 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.743017 | orchestrator | 2026-04-10 00:53:06.743032 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-10 00:53:06.743041 | orchestrator | Friday 10 April 2026 00:48:53 +0000 (0:00:00.377) 0:01:43.893 ********** 2026-04-10 00:53:06.743049 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.743057 | orchestrator | 2026-04-10 00:53:06.743065 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-10 00:53:06.743073 | orchestrator | Friday 10 April 2026 00:48:54 +0000 (0:00:01.069) 0:01:44.962 ********** 2026-04-10 00:53:06.743082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 00:53:06.743097 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 00:53:06.743110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 00:53:06.743173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 00:53:06.743190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743199 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 00:53:06.743254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 00:53:06.743263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743293 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743310 | orchestrator | 2026-04-10 00:53:06.743318 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-10 00:53:06.743326 | orchestrator | Friday 10 April 2026 00:48:59 +0000 (0:00:05.361) 0:01:50.324 ********** 2026-04-10 00:53:06.743342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 00:53:06.743355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 00:53:06.743364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743427 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.743445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 00:53:06.743453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 00:53:06.743462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 00:53:06.743484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 00:53:06.743492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 
00:53:06.743526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743564 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.743572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.743598 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.743606 | orchestrator | 2026-04-10 00:53:06.743614 | orchestrator | TASK [haproxy-config : Configuring 
firewall for designate] ********************* 2026-04-10 00:53:06.743622 | orchestrator | Friday 10 April 2026 00:49:01 +0000 (0:00:01.660) 0:01:51.984 ********** 2026-04-10 00:53:06.743630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-10 00:53:06.743638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-10 00:53:06.743652 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.743660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-10 00:53:06.743668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-10 00:53:06.743676 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.743684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-10 00:53:06.743692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-10 00:53:06.743700 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.743708 | orchestrator | 2026-04-10 00:53:06.743716 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-10 00:53:06.743724 | orchestrator | 
Friday 10 April 2026 00:49:03 +0000 (0:00:02.080) 0:01:54.064 ********** 2026-04-10 00:53:06.743732 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.743739 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.743747 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.743755 | orchestrator | 2026-04-10 00:53:06.743763 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-10 00:53:06.743771 | orchestrator | Friday 10 April 2026 00:49:04 +0000 (0:00:01.133) 0:01:55.198 ********** 2026-04-10 00:53:06.743779 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.743787 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.743795 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.743802 | orchestrator | 2026-04-10 00:53:06.743810 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-10 00:53:06.743818 | orchestrator | Friday 10 April 2026 00:49:06 +0000 (0:00:02.003) 0:01:57.201 ********** 2026-04-10 00:53:06.743826 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.743834 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.743841 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.743896 | orchestrator | 2026-04-10 00:53:06.743905 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-10 00:53:06.743913 | orchestrator | Friday 10 April 2026 00:49:06 +0000 (0:00:00.244) 0:01:57.446 ********** 2026-04-10 00:53:06.743921 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.743929 | orchestrator | 2026-04-10 00:53:06.743936 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-10 00:53:06.743944 | orchestrator | Friday 10 April 2026 00:49:07 +0000 (0:00:00.917) 0:01:58.363 ********** 2026-04-10 00:53:06.743966 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 00:53:06.743982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.744001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 00:53:06.744017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.744027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 00:53:06.744042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.744057 | orchestrator | 2026-04-10 00:53:06.744065 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-10 00:53:06.744088 | orchestrator | Friday 10 April 2026 00:49:12 +0000 (0:00:05.159) 0:02:03.523 ********** 2026-04-10 00:53:06.744098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 00:53:06.744117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.744135 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.744144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 00:53:06.744162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.744176 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.744185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 00:53:06.745078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-10 00:53:06.745188 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:53:06.745213 | orchestrator |
2026-04-10 00:53:06.745231 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-04-10 00:53:06.745248 | orchestrator | Friday 10 April 2026 00:49:16 +0000 (0:00:03.476) 0:02:07.000 **********
2026-04-10 00:53:06.745265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-10 00:53:06.745282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-10 00:53:06.745298 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:53:06.745314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-10 00:53:06.745330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-10 00:53:06.745346 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:53:06.745362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-10 00:53:06.745378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-10 00:53:06.745405 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:53:06.745497 | orchestrator |
2026-04-10 00:53:06.745515 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-10 00:53:06.745529 | orchestrator | Friday 10 April 2026 00:49:19 +0000 (0:00:03.267) 0:02:10.267 **********
2026-04-10 00:53:06.745545 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:53:06.745560 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:53:06.745574 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:53:06.745591 | orchestrator |
2026-04-10 00:53:06.745608 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-10 00:53:06.745626 | orchestrator | Friday 10 April 2026 00:49:20 +0000 (0:00:01.329) 0:02:11.597 **********
2026-04-10 00:53:06.745655 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:53:06.745674 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:53:06.745711 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:53:06.745729 | orchestrator |
2026-04-10 00:53:06.745746 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-10 00:53:06.745764 | orchestrator | Friday 10 April 2026 00:49:22 +0000 (0:00:01.835) 0:02:13.433 **********
2026-04-10 00:53:06.745781 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:53:06.745799 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:53:06.745815 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:53:06.745832 | orchestrator |
2026-04-10 00:53:06.745844 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-10 00:53:06.745854 | orchestrator | Friday 10 April 2026 00:49:22 +0000 (0:00:00.250) 0:02:13.684 **********
2026-04-10 00:53:06.745864 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.745874 | orchestrator | 2026-04-10 00:53:06.745884 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-10 00:53:06.745893 | orchestrator | Friday 10 April 2026 00:49:23 +0000 (0:00:00.862) 0:02:14.546 ********** 2026-04-10 00:53:06.745905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 00:53:06.745919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 00:53:06.745930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 00:53:06.745950 | orchestrator | 2026-04-10 00:53:06.745960 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-10 00:53:06.745970 | orchestrator | Friday 10 April 2026 00:49:26 +0000 (0:00:02.906) 0:02:17.452 ********** 2026-04-10 00:53:06.745981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-10 00:53:06.746004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-10 00:53:06.746067 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.746081 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.746099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-10 00:53:06.746117 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.746134 | orchestrator | 2026-04-10 00:53:06.746150 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-10 00:53:06.746166 | orchestrator | Friday 10 April 2026 00:49:27 +0000 (0:00:00.349) 0:02:17.802 ********** 2026-04-10 00:53:06.746184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-10 00:53:06.746203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-10 00:53:06.746220 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.746237 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-10 00:53:06.746254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-10 00:53:06.746283 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:53:06.746301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-10 00:53:06.746317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-10 00:53:06.746335 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:53:06.746345 | orchestrator |
2026-04-10 00:53:06.746356 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-10 00:53:06.746366 | orchestrator | Friday 10 April 2026 00:49:27 +0000 (0:00:00.727) 0:02:18.529 **********
2026-04-10 00:53:06.746376 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:53:06.746386 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:53:06.746395 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:53:06.746405 | orchestrator |
2026-04-10 00:53:06.746450 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-10 00:53:06.746467 | orchestrator | Friday 10 April 2026 00:49:29 +0000 (0:00:01.279) 0:02:19.808 **********
2026-04-10 00:53:06.746477 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:53:06.746488 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:53:06.746498 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:53:06.746507 | orchestrator |
2026-04-10 00:53:06.746517 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-10 00:53:06.746527 | orchestrator | Friday 10 April 2026 00:49:31 +0000 (0:00:01.971) 0:02:21.779 **********
2026-04-10 00:53:06.746537 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:53:06.746546 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:53:06.746556 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:53:06.746566 | orchestrator |
2026-04-10 00:53:06.746576 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-10 00:53:06.746586 | orchestrator | Friday 10 April 2026 00:49:31 +0000 (0:00:00.301) 0:02:22.081 **********
2026-04-10 00:53:06.746596 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:53:06.746606 | orchestrator |
2026-04-10 00:53:06.746616 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-10 00:53:06.746626 | orchestrator | Friday 10 April 2026 00:49:32 +0000 (0:00:01.238) 0:02:23.319 **********
2026-04-10 00:53:06.746659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:53:06.746681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:53:06.746708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:53:06.746726 | orchestrator | 2026-04-10 00:53:06.746736 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using 
single external frontend] *** 2026-04-10 00:53:06.746747 | orchestrator | Friday 10 April 2026 00:49:35 +0000 (0:00:03.442) 0:02:26.762 ********** 2026-04-10 00:53:06.746769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:53:06.746781 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.746793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:53:06.746809 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.746833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:53:06.746845 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.746855 | orchestrator | 2026-04-10 00:53:06.746865 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-10 00:53:06.746875 | orchestrator | Friday 10 April 2026 00:49:36 +0000 (0:00:00.537) 0:02:27.300 ********** 2026-04-10 00:53:06.746897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-10 00:53:06.746910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-10 00:53:06.746923 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-10 00:53:06.746934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-10 00:53:06.746944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-10 00:53:06.746956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-10 00:53:06.746966 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.746977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-10 00:53:06.746987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-10 00:53:06.746997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-10 00:53:06.747008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-10 00:53:06.747018 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.747032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-10 00:53:06.747048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-10 00:53:06.747065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-10 00:53:06.747075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-10 00:53:06.747085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-10 00:53:06.747095 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.747105 | orchestrator | 2026-04-10 00:53:06.747116 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-10 00:53:06.747126 | orchestrator | Friday 10 April 2026 00:49:37 +0000 (0:00:00.964) 0:02:28.264 ********** 2026-04-10 00:53:06.747137 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.747147 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.747157 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.747167 | orchestrator | 2026-04-10 00:53:06.747177 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-10 00:53:06.747187 | orchestrator | Friday 10 April 2026 00:49:39 +0000 (0:00:01.528) 0:02:29.793 ********** 2026-04-10 00:53:06.747197 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.747220 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.747230 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.747240 | orchestrator | 2026-04-10 00:53:06.747250 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-10 00:53:06.747260 | orchestrator | Friday 10 April 2026 00:49:41 +0000 (0:00:02.068) 0:02:31.862 ********** 2026-04-10 00:53:06.747269 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.747279 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.747289 | orchestrator | skipping: [testbed-node-2] 2026-04-10 
00:53:06.747299 | orchestrator | 2026-04-10 00:53:06.747308 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-10 00:53:06.747318 | orchestrator | Friday 10 April 2026 00:49:41 +0000 (0:00:00.321) 0:02:32.184 ********** 2026-04-10 00:53:06.747328 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.747338 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.747348 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.747358 | orchestrator | 2026-04-10 00:53:06.747368 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-10 00:53:06.747378 | orchestrator | Friday 10 April 2026 00:49:41 +0000 (0:00:00.328) 0:02:32.512 ********** 2026-04-10 00:53:06.747389 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.747398 | orchestrator | 2026-04-10 00:53:06.747475 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-10 00:53:06.747503 | orchestrator | Friday 10 April 2026 00:49:43 +0000 (0:00:01.261) 0:02:33.774 ********** 2026-04-10 00:53:06.747521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:53:06.747571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:53:06.747591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:53:06.747609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:53:06.747628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:53:06.747646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:53:06.747682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:53:06.747712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:53:06.747729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:53:06.747746 | orchestrator | 2026-04-10 00:53:06.747756 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-10 00:53:06.747767 | orchestrator | Friday 10 April 2026 00:49:46 +0000 (0:00:03.501) 0:02:37.275 ********** 2026-04-10 00:53:06.747778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:53:06.747789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:53:06.747806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:53:06.747816 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.747841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-10 00:53:06.747852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:53:06.747863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:53:06.747873 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.747883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:53:06.747902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:53:06.747916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:53:06.747927 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.747937 | orchestrator | 2026-04-10 00:53:06.747952 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-10 00:53:06.747963 | orchestrator | Friday 10 
April 2026 00:49:47 +0000 (0:00:00.544) 0:02:37.819 ********** 2026-04-10 00:53:06.747975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-10 00:53:06.747986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-10 00:53:06.747997 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.748007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-10 00:53:06.748017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-10 00:53:06.748027 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.748037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-10 00:53:06.748047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-04-10 00:53:06.748057 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.748067 | orchestrator | 2026-04-10 00:53:06.748077 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-10 00:53:06.748087 | orchestrator | Friday 10 April 2026 00:49:48 +0000 (0:00:01.030) 0:02:38.850 ********** 2026-04-10 00:53:06.748096 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.748113 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.748123 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.748133 | orchestrator | 2026-04-10 00:53:06.748143 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-10 00:53:06.748152 | orchestrator | Friday 10 April 2026 00:49:49 +0000 (0:00:01.292) 0:02:40.142 ********** 2026-04-10 00:53:06.748162 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.748172 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.748182 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.748191 | orchestrator | 2026-04-10 00:53:06.748201 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-10 00:53:06.748211 | orchestrator | Friday 10 April 2026 00:49:51 +0000 (0:00:01.668) 0:02:41.811 ********** 2026-04-10 00:53:06.748221 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.748231 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.748241 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.748251 | orchestrator | 2026-04-10 00:53:06.748260 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-10 00:53:06.748270 | orchestrator | Friday 10 April 2026 00:49:51 +0000 (0:00:00.279) 0:02:42.090 ********** 2026-04-10 00:53:06.748280 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 
00:53:06.748290 | orchestrator | 2026-04-10 00:53:06.748299 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-10 00:53:06.748309 | orchestrator | Friday 10 April 2026 00:49:52 +0000 (0:00:01.046) 0:02:43.136 ********** 2026-04-10 00:53:06.748324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 00:53:06.748343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 
00:53:06.748355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 00:53:06.748377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 00:53:06.748388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748433 | orchestrator | 2026-04-10 00:53:06.748451 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-10 00:53:06.748461 | orchestrator | Friday 10 April 2026 00:49:55 +0000 (0:00:03.234) 0:02:46.370 ********** 2026-04-10 00:53:06.748478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 00:53:06.748489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748519 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.748530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 00:53:06.748541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748551 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.748571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 00:53:06.748583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748593 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.748608 | orchestrator | 2026-04-10 00:53:06.748618 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-10 00:53:06.748628 | orchestrator | Friday 10 April 2026 00:49:56 +0000 (0:00:00.631) 0:02:47.001 ********** 2026-04-10 00:53:06.748639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-10 00:53:06.748650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-10 00:53:06.748660 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.748670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-10 00:53:06.748680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-10 00:53:06.748690 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.748700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-10 00:53:06.748711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-10 00:53:06.748720 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.748730 | orchestrator | 2026-04-10 00:53:06.748740 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-10 00:53:06.748750 | orchestrator | Friday 10 April 2026 00:49:57 +0000 (0:00:01.054) 0:02:48.056 ********** 2026-04-10 00:53:06.748760 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.748770 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.748780 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.748789 | orchestrator | 2026-04-10 00:53:06.748799 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-10 00:53:06.748808 | orchestrator | Friday 10 April 2026 00:49:58 +0000 (0:00:01.364) 0:02:49.421 ********** 2026-04-10 00:53:06.748818 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.748828 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.748838 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.748848 | orchestrator | 2026-04-10 00:53:06.748857 | 
orchestrator | TASK [include_role : manila] *************************************************** 2026-04-10 00:53:06.748867 | orchestrator | Friday 10 April 2026 00:50:00 +0000 (0:00:02.066) 0:02:51.488 ********** 2026-04-10 00:53:06.748877 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.748886 | orchestrator | 2026-04-10 00:53:06.748896 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-10 00:53:06.748906 | orchestrator | Friday 10 April 2026 00:50:01 +0000 (0:00:01.030) 0:02:52.518 ********** 2026-04-10 00:53:06.748928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-10 00:53:06.748948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.748980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-10 00:53:06.748991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-10 00:53:06.749534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749566 | orchestrator | 2026-04-10 00:53:06.749576 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-10 00:53:06.749586 | orchestrator | Friday 10 April 2026 00:50:05 +0000 (0:00:04.231) 0:02:56.750 ********** 2026-04-10 00:53:06.749671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-10 00:53:06.749699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749731 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.749742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-10 00:53:06.749752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749863 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.749874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-10 00:53:06.749884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.749948 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.749960 | orchestrator | 2026-04-10 00:53:06.749970 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-10 00:53:06.749980 | orchestrator | Friday 10 April 2026 00:50:06 +0000 (0:00:01.015) 0:02:57.765 ********** 2026-04-10 00:53:06.749990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-10 00:53:06.750001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-10 00:53:06.750146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-10 00:53:06.750167 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.750177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-10 00:53:06.750187 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.750197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-10 00:53:06.750207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-10 00:53:06.750216 | 
orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.750226 | orchestrator | 2026-04-10 00:53:06.750236 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-10 00:53:06.750246 | orchestrator | Friday 10 April 2026 00:50:07 +0000 (0:00:00.839) 0:02:58.605 ********** 2026-04-10 00:53:06.750256 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.750266 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.750275 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.750285 | orchestrator | 2026-04-10 00:53:06.750295 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-10 00:53:06.750304 | orchestrator | Friday 10 April 2026 00:50:09 +0000 (0:00:01.270) 0:02:59.875 ********** 2026-04-10 00:53:06.750314 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.750324 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.750334 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.750344 | orchestrator | 2026-04-10 00:53:06.750353 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-10 00:53:06.750363 | orchestrator | Friday 10 April 2026 00:50:11 +0000 (0:00:02.224) 0:03:02.099 ********** 2026-04-10 00:53:06.750372 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.750382 | orchestrator | 2026-04-10 00:53:06.750392 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-10 00:53:06.750402 | orchestrator | Friday 10 April 2026 00:50:12 +0000 (0:00:01.156) 0:03:03.256 ********** 2026-04-10 00:53:06.750485 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-10 00:53:06.750506 | orchestrator | 2026-04-10 00:53:06.750517 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-10 
00:53:06.750527 | orchestrator | Friday 10 April 2026 00:50:15 +0000 (0:00:03.102) 0:03:06.359 ********** 2026-04-10 00:53:06.750539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:53:06.750646 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-10 00:53:06.750663 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.750675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:53:06.750687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-10 00:53:06.750705 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.750797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:53:06.750814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-10 00:53:06.750824 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.750834 | orchestrator | 2026-04-10 00:53:06.750844 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
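The mariadb items above carry a `custom_member_list` that the haproxy-config role emits verbatim as backend `server` lines, with the first node primary and the others marked `backup` (Galera writes go to a single node through haproxy). A minimal sketch of how such member lines could be generated — the helper below is hypothetical, not the actual kolla-ansible template; only the line format is taken from the log:

```python
# Sketch: build haproxy backend "server" lines matching the
# custom_member_list entries in the log above. The member_lines()
# helper is illustrative, not part of kolla-ansible.
def member_lines(hosts, port=3306, primary=0):
    lines = []
    for i, (name, addr) in enumerate(hosts):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if i != primary:
            line += " backup"  # non-primary Galera nodes only serve on failover
        lines.append(line)
    return lines

hosts = [("testbed-node-0", "192.168.16.10"),
         ("testbed-node-1", "192.168.16.11"),
         ("testbed-node-2", "192.168.16.12")]
print("\n".join(member_lines(hosts)))
```

The external-LB variant in the log differs only in using hostnames instead of the 192.168.16.x addresses.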
2026-04-10 00:53:06.750854 | orchestrator | Friday 10 April 2026 00:50:17 +0000 (0:00:02.231) 0:03:08.590 ********** 2026-04-10 00:53:06.750866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:53:06.750884 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-10 00:53:06.750895 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.750947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:53:06.750959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-10 00:53:06.750973 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.750982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:53:06.751050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-10 00:53:06.751063 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.751072 | orchestrator | 2026-04-10 00:53:06.751080 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-10 
00:53:06.751088 | orchestrator | Friday 10 April 2026 00:50:20 +0000 (0:00:02.290) 0:03:10.881 ********** 2026-04-10 00:53:06.751097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-10 00:53:06.751106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-10 00:53:06.751122 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.751131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-10 00:53:06.751139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-10 00:53:06.751147 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.751156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-10 00:53:06.751220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}})  2026-04-10 00:53:06.751233 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.751241 | orchestrator | 2026-04-10 00:53:06.751250 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-10 00:53:06.751258 | orchestrator | Friday 10 April 2026 00:50:22 +0000 (0:00:01.964) 0:03:12.846 ********** 2026-04-10 00:53:06.751266 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.751278 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.751294 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.751316 | orchestrator | 2026-04-10 00:53:06.751328 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-10 00:53:06.751342 | orchestrator | Friday 10 April 2026 00:50:24 +0000 (0:00:01.935) 0:03:14.781 ********** 2026-04-10 00:53:06.751356 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.751368 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.751380 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.751393 | orchestrator | 2026-04-10 00:53:06.751406 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-10 00:53:06.751442 | orchestrator | Friday 10 April 2026 00:50:25 +0000 (0:00:01.442) 0:03:16.223 ********** 2026-04-10 00:53:06.751465 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.751477 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.751490 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.751504 | orchestrator | 2026-04-10 00:53:06.751516 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-10 00:53:06.751529 | orchestrator | Friday 10 April 2026 00:50:25 +0000 (0:00:00.254) 0:03:16.478 ********** 2026-04-10 00:53:06.751543 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 
00:53:06.751556 | orchestrator | 2026-04-10 00:53:06.751569 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-10 00:53:06.751581 | orchestrator | Friday 10 April 2026 00:50:26 +0000 (0:00:01.196) 0:03:17.675 ********** 2026-04-10 00:53:06.751595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-10 00:53:06.751612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-10 00:53:06.751627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-10 00:53:06.751639 | orchestrator | 2026-04-10 00:53:06.751652 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-10 00:53:06.751664 | orchestrator | Friday 10 April 2026 00:50:28 +0000 (0:00:01.526) 0:03:19.202 ********** 2026-04-10 00:53:06.751808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-10 00:53:06.751846 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.751861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 
'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-10 00:53:06.751875 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.751890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-10 00:53:06.751904 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.751920 | orchestrator | 2026-04-10 00:53:06.751934 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-10 00:53:06.751948 | orchestrator | Friday 10 April 2026 00:50:28 +0000 (0:00:00.382) 0:03:19.584 ********** 2026-04-10 00:53:06.751962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-10 00:53:06.751977 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.751992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-10 00:53:06.752007 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.752022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-10 00:53:06.752035 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.752047 | orchestrator | 2026-04-10 00:53:06.752060 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-10 00:53:06.752073 | orchestrator | Friday 10 April 2026 00:50:29 +0000 (0:00:00.900) 0:03:20.485 ********** 2026-04-10 00:53:06.752085 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.752098 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.752110 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.752123 | orchestrator | 2026-04-10 00:53:06.752136 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-10 00:53:06.752149 | orchestrator | Friday 10 April 2026 00:50:30 +0000 (0:00:00.366) 0:03:20.851 ********** 2026-04-10 00:53:06.752161 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.752185 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.752200 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.752213 | orchestrator | 
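The memcached container above is wired with the healthcheck test `healthcheck_listen memcached 11211`. Kolla's `healthcheck_listen` script checks that the named process is listening on the port; as a rough stand-in, the essence of a listen-style check is just "does a TCP connect succeed" — the function below is an illustrative sketch, not the actual script:

```python
# Sketch of a listen-style healthcheck: succeed iff something accepts
# TCP connections on the given port. kolla's healthcheck_listen also
# verifies the owning process name; this stand-in checks the socket only.
import socket

def port_is_listening(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A container healthcheck built this way flips to unhealthy after `retries` consecutive failures at the configured `interval`, which matches the `{'interval': '30', 'retries': '3', ...}` dimensions seen throughout this log.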
2026-04-10 00:53:06.752226 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-10 00:53:06.752238 | orchestrator | Friday 10 April 2026 00:50:31 +0000 (0:00:01.115) 0:03:21.967 ********** 2026-04-10 00:53:06.752259 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.752271 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.752285 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.752401 | orchestrator | 2026-04-10 00:53:06.752448 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-10 00:53:06.752457 | orchestrator | Friday 10 April 2026 00:50:31 +0000 (0:00:00.249) 0:03:22.217 ********** 2026-04-10 00:53:06.752465 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.752473 | orchestrator | 2026-04-10 00:53:06.752482 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-10 00:53:06.752490 | orchestrator | Friday 10 April 2026 00:50:32 +0000 (0:00:01.337) 0:03:23.554 ********** 2026-04-10 00:53:06.752500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 00:53:06.752510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 00:53:06.752645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-10 00:53:06.752659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.752837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.752879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-10 00:53:06.752893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.752932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.753041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.753178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.753210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.753235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.753306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.753315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 00:53:06.753330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-10 00:53:06.753485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.753676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2026-04-10 00:53:06.753684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.753710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.753786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.753799 | orchestrator | 2026-04-10 00:53:06.753807 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-10 00:53:06.753817 | orchestrator | Friday 10 April 2026 00:50:36 +0000 (0:00:04.205) 0:03:27.760 ********** 2026-04-10 00:53:06.753826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 00:53:06.753835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-10 00:53:06.753940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 00:53:06.753948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 00:53:06.753972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.753980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': 
True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 
'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-10 00:53:06.754203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.754211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-10 00:53:06.754218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754228 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.754435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.754461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.754523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.754534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 
'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754548 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.754555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-10 00:53:06.754626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.754638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.754645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.754652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-10 00:53:06.754659 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.754691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-10 00:53:06.754699 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.754706 | orchestrator | 2026-04-10 00:53:06.754713 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-10 00:53:06.754728 | orchestrator | Friday 10 April 2026 00:50:38 +0000 (0:00:01.796) 0:03:29.556 ********** 2026-04-10 00:53:06.754735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-10 00:53:06.754743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-10 00:53:06.754750 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.754757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-10 00:53:06.754764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-10 00:53:06.754770 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.754777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2026-04-10 00:53:06.754784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-10 00:53:06.754791 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.754798 | orchestrator | 2026-04-10 00:53:06.754805 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-10 00:53:06.754812 | orchestrator | Friday 10 April 2026 00:50:40 +0000 (0:00:01.334) 0:03:30.890 ********** 2026-04-10 00:53:06.754819 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.754825 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.754832 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.754839 | orchestrator | 2026-04-10 00:53:06.754846 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-10 00:53:06.754852 | orchestrator | Friday 10 April 2026 00:50:41 +0000 (0:00:01.248) 0:03:32.138 ********** 2026-04-10 00:53:06.754859 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.754866 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.754872 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.754879 | orchestrator | 2026-04-10 00:53:06.754886 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-10 00:53:06.754892 | orchestrator | Friday 10 April 2026 00:50:43 +0000 (0:00:01.831) 0:03:33.970 ********** 2026-04-10 00:53:06.754899 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.754906 | orchestrator | 2026-04-10 00:53:06.754912 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-10 00:53:06.754919 | orchestrator | Friday 10 April 2026 00:50:44 +0000 (0:00:01.230) 0:03:35.200 
********** 2026-04-10 00:53:06.754926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.754964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.754973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.754980 | orchestrator | 2026-04-10 00:53:06.754987 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-10 00:53:06.754994 | orchestrator | Friday 10 April 2026 00:50:47 +0000 (0:00:02.904) 0:03:38.105 ********** 2026-04-10 00:53:06.755001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.755008 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.755016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.755031 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.755077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.755094 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.755106 | orchestrator | 2026-04-10 00:53:06.755117 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-10 00:53:06.755128 | orchestrator | Friday 10 April 2026 00:50:47 +0000 (0:00:00.400) 0:03:38.505 ********** 2026-04-10 00:53:06.755138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755163 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.755174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755197 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.755209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755232 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.755243 | orchestrator | 2026-04-10 00:53:06.755255 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-10 00:53:06.755266 | orchestrator | Friday 10 April 2026 00:50:48 +0000 (0:00:01.014) 0:03:39.519 ********** 2026-04-10 00:53:06.755278 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.755289 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.755300 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.755312 | orchestrator | 2026-04-10 00:53:06.755323 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-10 00:53:06.755335 | orchestrator | Friday 10 April 2026 00:50:50 +0000 (0:00:01.289) 0:03:40.809 ********** 2026-04-10 00:53:06.755345 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.755354 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.755362 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.755369 | orchestrator | 2026-04-10 00:53:06.755378 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-10 00:53:06.755395 | orchestrator | Friday 10 April 2026 00:50:52 +0000 (0:00:02.036) 0:03:42.846 ********** 2026-04-10 00:53:06.755403 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.755431 | orchestrator | 2026-04-10 00:53:06.755439 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-10 00:53:06.755446 | orchestrator | Friday 10 April 2026 00:50:53 +0000 (0:00:01.377) 0:03:44.223 ********** 2026-04-10 00:53:06.755461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.755507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.755540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.755570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755601 | orchestrator | 2026-04-10 00:53:06.755607 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-10 00:53:06.755615 | orchestrator | Friday 10 April 2026 00:50:57 +0000 (0:00:03.912) 0:03:48.135 ********** 2026-04-10 00:53:06.755627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.755635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755672 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.755680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.755687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755791 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.755802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.755837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.755852 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.755859 | orchestrator | 2026-04-10 00:53:06.755866 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-10 00:53:06.755874 | orchestrator | Friday 10 April 2026 00:50:57 +0000 (0:00:00.516) 0:03:48.652 ********** 2026-04-10 00:53:06.755881 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755917 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.755924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755950 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.755957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-10 00:53:06.755981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-10 00:53:06.756008 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.756016 | orchestrator | 2026-04-10 00:53:06.756023 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-10 00:53:06.756030 | orchestrator | Friday 10 April 2026 00:50:58 +0000 (0:00:00.754) 0:03:49.407 ********** 2026-04-10 00:53:06.756037 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.756044 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.756050 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.756057 | orchestrator | 2026-04-10 00:53:06.756064 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-10 00:53:06.756070 | orchestrator | Friday 10 April 2026 00:51:00 +0000 (0:00:01.573) 0:03:50.981 ********** 2026-04-10 
00:53:06.756077 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.756084 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.756091 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.756097 | orchestrator | 2026-04-10 00:53:06.756104 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-10 00:53:06.756116 | orchestrator | Friday 10 April 2026 00:51:02 +0000 (0:00:01.973) 0:03:52.955 ********** 2026-04-10 00:53:06.756123 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.756129 | orchestrator | 2026-04-10 00:53:06.756136 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-10 00:53:06.756143 | orchestrator | Friday 10 April 2026 00:51:03 +0000 (0:00:01.167) 0:03:54.123 ********** 2026-04-10 00:53:06.756150 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-10 00:53:06.756157 | orchestrator | 2026-04-10 00:53:06.756163 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-10 00:53:06.756170 | orchestrator | Friday 10 April 2026 00:51:04 +0000 (0:00:01.096) 0:03:55.219 ********** 2026-04-10 00:53:06.756177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-10 00:53:06.756185 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-10 00:53:06.756192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-10 00:53:06.756199 | orchestrator | 2026-04-10 00:53:06.756206 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-10 00:53:06.756213 | orchestrator | Friday 10 April 2026 00:51:07 +0000 (0:00:03.415) 0:03:58.635 ********** 2026-04-10 00:53:06.756220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756227 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.756238 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756266 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.756274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756286 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.756293 | orchestrator | 2026-04-10 00:53:06.756300 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-10 00:53:06.756306 | orchestrator | Friday 10 April 2026 00:51:09 +0000 (0:00:01.159) 0:03:59.795 ********** 2026-04-10 00:53:06.756313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-10 00:53:06.756320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-04-10 00:53:06.756328 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.756335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-10 00:53:06.756342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-10 00:53:06.756349 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.756356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-10 00:53:06.756363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-10 00:53:06.756370 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.756377 | orchestrator | 2026-04-10 00:53:06.756383 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-10 00:53:06.756390 | orchestrator | Friday 10 April 2026 00:51:10 +0000 (0:00:01.607) 0:04:01.402 ********** 2026-04-10 00:53:06.756396 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.756403 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.756462 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.756470 | orchestrator | 2026-04-10 00:53:06.756477 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2026-04-10 00:53:06.756484 | orchestrator | Friday 10 April 2026 00:51:12 +0000 (0:00:02.277) 0:04:03.680 ********** 2026-04-10 00:53:06.756491 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.756497 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.756504 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.756511 | orchestrator | 2026-04-10 00:53:06.756518 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-10 00:53:06.756524 | orchestrator | Friday 10 April 2026 00:51:15 +0000 (0:00:02.997) 0:04:06.677 ********** 2026-04-10 00:53:06.756532 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-10 00:53:06.756538 | orchestrator | 2026-04-10 00:53:06.756545 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-10 00:53:06.756558 | orchestrator | Friday 10 April 2026 00:51:16 +0000 (0:00:00.847) 0:04:07.524 ********** 2026-04-10 00:53:06.756565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756572 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.756608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756616 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.756624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756631 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.756637 | orchestrator | 2026-04-10 00:53:06.756644 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-10 00:53:06.756651 | orchestrator | Friday 10 April 2026 00:51:18 +0000 (0:00:01.303) 0:04:08.828 ********** 2026-04-10 00:53:06.756658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756665 | orchestrator | skipping: [testbed-node-0] 2026-04-10 
00:53:06.756672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756679 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.756686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-10 00:53:06.756701 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.756708 | orchestrator | 2026-04-10 00:53:06.756715 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-10 00:53:06.756721 | orchestrator | Friday 10 April 2026 00:51:19 +0000 (0:00:01.299) 0:04:10.128 ********** 2026-04-10 00:53:06.756728 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.756735 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.756741 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.756748 | orchestrator | 2026-04-10 00:53:06.756755 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-10 00:53:06.756761 | orchestrator | Friday 10 April 2026 
00:51:20 +0000 (0:00:01.128) 0:04:11.256 ********** 2026-04-10 00:53:06.756768 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.756775 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.756782 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.756788 | orchestrator | 2026-04-10 00:53:06.756795 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-10 00:53:06.756802 | orchestrator | Friday 10 April 2026 00:51:22 +0000 (0:00:02.232) 0:04:13.489 ********** 2026-04-10 00:53:06.756808 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.756815 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.756821 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.756828 | orchestrator | 2026-04-10 00:53:06.756834 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-10 00:53:06.756841 | orchestrator | Friday 10 April 2026 00:51:25 +0000 (0:00:02.894) 0:04:16.383 ********** 2026-04-10 00:53:06.756851 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-10 00:53:06.756858 | orchestrator | 2026-04-10 00:53:06.756886 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-10 00:53:06.756894 | orchestrator | Friday 10 April 2026 00:51:26 +0000 (0:00:00.774) 0:04:17.158 ********** 2026-04-10 00:53:06.756901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-10 00:53:06.756908 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.756915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-10 00:53:06.756922 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.756929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-10 00:53:06.756936 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.756943 | orchestrator | 2026-04-10 00:53:06.756950 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-10 00:53:06.756970 | orchestrator | Friday 10 April 2026 00:51:27 +0000 (0:00:01.199) 0:04:18.357 ********** 2026-04-10 00:53:06.756980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-10 00:53:06.756991 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.757001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-10 00:53:06.757013 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.757023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-10 00:53:06.757034 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.757045 | orchestrator | 2026-04-10 00:53:06.757055 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-10 00:53:06.757067 | orchestrator | Friday 10 April 2026 00:51:28 +0000 (0:00:01.087) 0:04:19.445 ********** 2026-04-10 00:53:06.757073 | orchestrator | skipping: 
[testbed-node-0] 2026-04-10 00:53:06.757080 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.757086 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.757092 | orchestrator | 2026-04-10 00:53:06.757102 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-10 00:53:06.757131 | orchestrator | Friday 10 April 2026 00:51:30 +0000 (0:00:01.340) 0:04:20.785 ********** 2026-04-10 00:53:06.757139 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.757145 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.757152 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.757158 | orchestrator | 2026-04-10 00:53:06.757164 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-10 00:53:06.757170 | orchestrator | Friday 10 April 2026 00:51:32 +0000 (0:00:02.467) 0:04:23.253 ********** 2026-04-10 00:53:06.757177 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.757183 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.757189 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.757195 | orchestrator | 2026-04-10 00:53:06.757201 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-10 00:53:06.757208 | orchestrator | Friday 10 April 2026 00:51:35 +0000 (0:00:02.810) 0:04:26.064 ********** 2026-04-10 00:53:06.757214 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.757220 | orchestrator | 2026-04-10 00:53:06.757226 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-10 00:53:06.757233 | orchestrator | Friday 10 April 2026 00:51:36 +0000 (0:00:01.217) 0:04:27.281 ********** 2026-04-10 00:53:06.757240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.757254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 00:53:06.757262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757269 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.757311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.757323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 00:53:06.757330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.757370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.757378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 00:53:06.757385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.757427 | orchestrator | 2026-04-10 00:53:06.757434 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-10 00:53:06.757441 | orchestrator | Friday 10 April 2026 00:51:40 +0000 (0:00:03.600) 0:04:30.882 ********** 2026-04-10 00:53:06.757447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.757457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 00:53:06.757483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.757508 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.757515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.757522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 00:53:06.757528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.757575 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.757582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.757589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 00:53:06.757596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 00:53:06.757632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 00:53:06.757648 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.757659 | orchestrator | 2026-04-10 00:53:06.757669 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-10 00:53:06.757682 | orchestrator | Friday 10 April 2026 00:51:41 +0000 (0:00:01.101) 0:04:31.984 ********** 2026-04-10 00:53:06.757698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-10 00:53:06.757709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-10 00:53:06.757719 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.757729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2026-04-10 00:53:06.757739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-10 00:53:06.757750 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.757760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-10 00:53:06.757769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-10 00:53:06.757779 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.757789 | orchestrator | 2026-04-10 00:53:06.757799 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-10 00:53:06.757809 | orchestrator | Friday 10 April 2026 00:51:42 +0000 (0:00:00.963) 0:04:32.947 ********** 2026-04-10 00:53:06.757818 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.757828 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.757838 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.757848 | orchestrator | 2026-04-10 00:53:06.757858 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-10 00:53:06.757868 | orchestrator | Friday 10 April 2026 00:51:43 +0000 (0:00:01.342) 0:04:34.290 ********** 2026-04-10 00:53:06.757878 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.757888 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.757898 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.757907 | orchestrator | 
2026-04-10 00:53:06.757914 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-10 00:53:06.757920 | orchestrator | Friday 10 April 2026 00:51:45 +0000 (0:00:02.418) 0:04:36.709 ********** 2026-04-10 00:53:06.757926 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.757933 | orchestrator | 2026-04-10 00:53:06.757939 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-10 00:53:06.757945 | orchestrator | Friday 10 April 2026 00:51:47 +0000 (0:00:01.831) 0:04:38.540 ********** 2026-04-10 00:53:06.757953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:53:06.758061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:53:06.758079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:53:06.758091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:53:06.758103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:53:06.758162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:53:06.758173 | orchestrator | 2026-04-10 00:53:06.758180 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-10 00:53:06.758186 | orchestrator | Friday 10 April 2026 00:51:53 +0000 (0:00:05.246) 0:04:43.786 ********** 2026-04-10 00:53:06.758193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:53:06.758200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:53:06.758207 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.758213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:53:06.758251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:53:06.758259 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.758266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:53:06.758273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:53:06.758280 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.758286 | orchestrator | 2026-04-10 00:53:06.758292 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-10 00:53:06.758299 | orchestrator | Friday 10 April 2026 00:51:53 +0000 (0:00:00.804) 0:04:44.591 ********** 2026-04-10 00:53:06.758305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-10 00:53:06.758312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-10 00:53:06.758325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-10 00:53:06.758338 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.758347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-10 00:53:06.758358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-10 00:53:06.758368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-10 00:53:06.758378 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.758389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-10 00:53:06.758465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-10 00:53:06.758482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-10 00:53:06.758493 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.758504 | orchestrator | 2026-04-10 00:53:06.758513 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL users config] ********* 2026-04-10 00:53:06.758524 | orchestrator | Friday 10 April 2026 00:51:54 +0000 (0:00:01.128) 0:04:45.720 ********** 2026-04-10 00:53:06.758533 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.758539 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.758546 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.758552 | orchestrator | 2026-04-10 00:53:06.758558 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-10 00:53:06.758564 | orchestrator | Friday 10 April 2026 00:51:55 +0000 (0:00:00.413) 0:04:46.133 ********** 2026-04-10 00:53:06.758570 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.758576 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.758583 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.758589 | orchestrator | 2026-04-10 00:53:06.758595 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-10 00:53:06.758601 | orchestrator | Friday 10 April 2026 00:51:56 +0000 (0:00:01.102) 0:04:47.236 ********** 2026-04-10 00:53:06.758607 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.758613 | orchestrator | 2026-04-10 00:53:06.758620 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-10 00:53:06.758626 | orchestrator | Friday 10 April 2026 00:51:57 +0000 (0:00:01.471) 0:04:48.707 ********** 2026-04-10 00:53:06.758632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-10 00:53:06.758647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 00:53:06.758656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758705 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.758719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-10 00:53:06.758729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 00:53:06.758739 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.758778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-10 00:53:06.758828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 00:53:06.758837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 
00:53:06.758856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.758875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-10 00:53:06.758888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-10 00:53:06.758900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-10 00:53:06.758937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-10 00:53:06.758967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.758978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.758989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.759027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-10 00:53:06.759046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-10 00:53:06.759057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.759088 | orchestrator | 2026-04-10 00:53:06.759098 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-10 00:53:06.759107 | orchestrator | Friday 10 April 2026 00:52:01 +0000 (0:00:03.999) 0:04:52.707 ********** 2026-04-10 00:53:06.759128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-10 00:53:06.759139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 00:53:06.759159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759179 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.759190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-10 00:53:06.759211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-10 00:53:06.759223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-10 00:53:06.759245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 00:53:06.759267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-04-10 00:53:06.759287 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.759325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-10 00:53:06.759332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-10 00:53:06.759339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-10 00:53:06.759352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 00:53:06.759378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.759385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759391 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.759437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-10 00:53:06.759453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-10 00:53:06.759469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 00:53:06.759482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 00:53:06.759489 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.759495 | orchestrator | 2026-04-10 00:53:06.759501 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-10 00:53:06.759508 | orchestrator | Friday 10 April 2026 00:52:02 +0000 (0:00:00.773) 0:04:53.481 ********** 2026-04-10 00:53:06.759514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': 
True}})  2026-04-10 00:53:06.759521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-10 00:53:06.759529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-10 00:53:06.759537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-10 00:53:06.759544 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-10 00:53:06.759557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-10 00:53:06.759564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-10 00:53:06.759579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-10 00:53:06.759585 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-10 00:53:06.759602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-10 00:53:06.759609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-10 00:53:06.759616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-10 00:53:06.759622 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.759628 | orchestrator | 2026-04-10 00:53:06.759635 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-10 00:53:06.759641 | orchestrator | Friday 10 April 2026 00:52:03 +0000 (0:00:01.114) 0:04:54.595 ********** 2026-04-10 00:53:06.759647 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759653 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759660 | orchestrator | 
skipping: [testbed-node-2] 2026-04-10 00:53:06.759666 | orchestrator | 2026-04-10 00:53:06.759672 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-10 00:53:06.759678 | orchestrator | Friday 10 April 2026 00:52:04 +0000 (0:00:00.448) 0:04:55.044 ********** 2026-04-10 00:53:06.759684 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759691 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759697 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.759703 | orchestrator | 2026-04-10 00:53:06.759709 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-10 00:53:06.759715 | orchestrator | Friday 10 April 2026 00:52:05 +0000 (0:00:01.139) 0:04:56.183 ********** 2026-04-10 00:53:06.759722 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.759728 | orchestrator | 2026-04-10 00:53:06.759734 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-10 00:53:06.759740 | orchestrator | Friday 10 April 2026 00:52:06 +0000 (0:00:01.300) 0:04:57.484 ********** 2026-04-10 00:53:06.759747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:53:06.759758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:53:06.759773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-10 00:53:06.759781 | orchestrator | 2026-04-10 00:53:06.759787 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-10 00:53:06.759793 | orchestrator | Friday 10 April 2026 00:52:09 +0000 (0:00:02.508) 0:04:59.992 ********** 2026-04-10 00:53:06.759800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-10 00:53:06.759807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-10 00:53:06.759818 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759825 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-10 00:53:06.759841 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.759847 | orchestrator | 2026-04-10 00:53:06.759857 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-10 00:53:06.759864 | orchestrator | Friday 10 
April 2026 00:52:09 +0000 (0:00:00.369) 0:05:00.362 ********** 2026-04-10 00:53:06.759870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-10 00:53:06.759877 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-10 00:53:06.759889 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-10 00:53:06.759902 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.759908 | orchestrator | 2026-04-10 00:53:06.759914 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-10 00:53:06.759920 | orchestrator | Friday 10 April 2026 00:52:10 +0000 (0:00:00.586) 0:05:00.949 ********** 2026-04-10 00:53:06.759927 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759933 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759939 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.759945 | orchestrator | 2026-04-10 00:53:06.759952 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-10 00:53:06.759958 | orchestrator | Friday 10 April 2026 00:52:10 +0000 (0:00:00.743) 0:05:01.692 ********** 2026-04-10 00:53:06.759964 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.759970 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.759977 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.759983 | orchestrator | 2026-04-10 00:53:06.759989 | orchestrator | TASK [include_role : skyline] 
************************************************** 2026-04-10 00:53:06.759995 | orchestrator | Friday 10 April 2026 00:52:12 +0000 (0:00:01.191) 0:05:02.883 ********** 2026-04-10 00:53:06.760001 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:53:06.760008 | orchestrator | 2026-04-10 00:53:06.760014 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-10 00:53:06.760025 | orchestrator | Friday 10 April 2026 00:52:13 +0000 (0:00:01.373) 0:05:04.257 ********** 2026-04-10 00:53:06.760032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.760039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.760054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.760062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.760069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.760083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-10 00:53:06.760089 | orchestrator | 2026-04-10 00:53:06.760095 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-10 00:53:06.760101 | orchestrator | Friday 10 April 2026 00:52:19 +0000 (0:00:05.709) 0:05:09.967 ********** 2026-04-10 00:53:06.760114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.760121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.760128 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.760145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.760152 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.760169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-10 00:53:06.760176 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760182 | orchestrator | 2026-04-10 00:53:06.760189 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-10 00:53:06.760195 | orchestrator | Friday 10 April 2026 00:52:20 +0000 (0:00:00.822) 0:05:10.789 ********** 2026-04-10 00:53:06.760206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760315 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760335 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-10 00:53:06.760367 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760373 | orchestrator | 2026-04-10 00:53:06.760379 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-10 00:53:06.760386 | orchestrator | Friday 10 April 2026 00:52:20 +0000 (0:00:00.847) 0:05:11.637 ********** 2026-04-10 00:53:06.760392 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.760402 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.760423 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.760430 | orchestrator | 2026-04-10 00:53:06.760441 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-10 00:53:06.760448 | orchestrator | Friday 10 April 2026 00:52:22 +0000 (0:00:01.189) 0:05:12.826 ********** 2026-04-10 00:53:06.760454 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.760460 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.760471 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.760478 | orchestrator | 2026-04-10 00:53:06.760484 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-10 00:53:06.760490 | orchestrator | Friday 10 April 2026 00:52:24 +0000 (0:00:02.078) 0:05:14.905 ********** 2026-04-10 00:53:06.760496 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760503 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760509 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760515 | orchestrator | 2026-04-10 00:53:06.760521 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-10 00:53:06.760528 | orchestrator | Friday 10 April 2026 00:52:24 
+0000 (0:00:00.467) 0:05:15.373 ********** 2026-04-10 00:53:06.760534 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760540 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760546 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760552 | orchestrator | 2026-04-10 00:53:06.760559 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-10 00:53:06.760565 | orchestrator | Friday 10 April 2026 00:52:24 +0000 (0:00:00.272) 0:05:15.645 ********** 2026-04-10 00:53:06.760571 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760577 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760583 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760589 | orchestrator | 2026-04-10 00:53:06.760596 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-10 00:53:06.760602 | orchestrator | Friday 10 April 2026 00:52:25 +0000 (0:00:00.263) 0:05:15.908 ********** 2026-04-10 00:53:06.760608 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760614 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760621 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760627 | orchestrator | 2026-04-10 00:53:06.760633 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-10 00:53:06.760639 | orchestrator | Friday 10 April 2026 00:52:25 +0000 (0:00:00.262) 0:05:16.171 ********** 2026-04-10 00:53:06.760645 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760652 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760658 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760664 | orchestrator | 2026-04-10 00:53:06.760670 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-10 00:53:06.760676 | orchestrator | Friday 10 April 2026 00:52:25 
+0000 (0:00:00.462) 0:05:16.633 ********** 2026-04-10 00:53:06.760682 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.760688 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.760695 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.760701 | orchestrator | 2026-04-10 00:53:06.760707 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-10 00:53:06.760713 | orchestrator | Friday 10 April 2026 00:52:26 +0000 (0:00:00.463) 0:05:17.096 ********** 2026-04-10 00:53:06.760719 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.760726 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.760733 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.760739 | orchestrator | 2026-04-10 00:53:06.760745 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-10 00:53:06.760751 | orchestrator | Friday 10 April 2026 00:52:26 +0000 (0:00:00.639) 0:05:17.736 ********** 2026-04-10 00:53:06.760758 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.760764 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.760770 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.760776 | orchestrator | 2026-04-10 00:53:06.760783 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-10 00:53:06.760789 | orchestrator | Friday 10 April 2026 00:52:27 +0000 (0:00:00.555) 0:05:18.291 ********** 2026-04-10 00:53:06.760795 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.760801 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.760807 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.760818 | orchestrator | 2026-04-10 00:53:06.760824 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-10 00:53:06.760831 | orchestrator | Friday 10 April 2026 00:52:28 +0000 (0:00:00.910) 0:05:19.202 ********** 2026-04-10 
00:53:06.760837 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.760843 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.760849 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.760855 | orchestrator | 2026-04-10 00:53:06.760862 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-10 00:53:06.760868 | orchestrator | Friday 10 April 2026 00:52:29 +0000 (0:00:00.894) 0:05:20.096 ********** 2026-04-10 00:53:06.760874 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.760880 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.760886 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.760892 | orchestrator | 2026-04-10 00:53:06.760899 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-10 00:53:06.760905 | orchestrator | Friday 10 April 2026 00:52:30 +0000 (0:00:00.973) 0:05:21.069 ********** 2026-04-10 00:53:06.760911 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.760917 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.760923 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.760929 | orchestrator | 2026-04-10 00:53:06.760936 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-10 00:53:06.760942 | orchestrator | Friday 10 April 2026 00:52:34 +0000 (0:00:04.561) 0:05:25.631 ********** 2026-04-10 00:53:06.760948 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.760954 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.760960 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.760966 | orchestrator | 2026-04-10 00:53:06.760973 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-10 00:53:06.760979 | orchestrator | Friday 10 April 2026 00:52:38 +0000 (0:00:03.249) 0:05:28.881 ********** 2026-04-10 00:53:06.760985 | orchestrator | changed: [testbed-node-0] 
2026-04-10 00:53:06.760991 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.760998 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.761004 | orchestrator | 2026-04-10 00:53:06.761013 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-10 00:53:06.761023 | orchestrator | Friday 10 April 2026 00:52:47 +0000 (0:00:09.335) 0:05:38.216 ********** 2026-04-10 00:53:06.761031 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.761042 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.761051 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.761061 | orchestrator | 2026-04-10 00:53:06.761070 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-10 00:53:06.761079 | orchestrator | Friday 10 April 2026 00:52:52 +0000 (0:00:04.739) 0:05:42.956 ********** 2026-04-10 00:53:06.761089 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:53:06.761099 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:53:06.761108 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:53:06.761118 | orchestrator | 2026-04-10 00:53:06.761127 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-10 00:53:06.761136 | orchestrator | Friday 10 April 2026 00:53:01 +0000 (0:00:09.103) 0:05:52.059 ********** 2026-04-10 00:53:06.761145 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.761156 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.761165 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.761175 | orchestrator | 2026-04-10 00:53:06.761185 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-10 00:53:06.761196 | orchestrator | Friday 10 April 2026 00:53:01 +0000 (0:00:00.539) 0:05:52.598 ********** 2026-04-10 00:53:06.761206 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.761216 
| orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.761227 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.761238 | orchestrator | 2026-04-10 00:53:06.761248 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-10 00:53:06.761265 | orchestrator | Friday 10 April 2026 00:53:02 +0000 (0:00:00.288) 0:05:52.886 ********** 2026-04-10 00:53:06.761275 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.761286 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.761297 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.761308 | orchestrator | 2026-04-10 00:53:06.761318 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-10 00:53:06.761328 | orchestrator | Friday 10 April 2026 00:53:02 +0000 (0:00:00.313) 0:05:53.200 ********** 2026-04-10 00:53:06.761337 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.761345 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.761356 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.761365 | orchestrator | 2026-04-10 00:53:06.761375 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-10 00:53:06.761386 | orchestrator | Friday 10 April 2026 00:53:02 +0000 (0:00:00.320) 0:05:53.521 ********** 2026-04-10 00:53:06.761396 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.761406 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.761467 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.761479 | orchestrator | 2026-04-10 00:53:06.761489 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-10 00:53:06.761499 | orchestrator | Friday 10 April 2026 00:53:03 +0000 (0:00:00.546) 0:05:54.068 ********** 2026-04-10 00:53:06.761510 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:53:06.761521 | 
orchestrator | skipping: [testbed-node-1] 2026-04-10 00:53:06.761531 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:53:06.761542 | orchestrator | 2026-04-10 00:53:06.761552 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-10 00:53:06.761562 | orchestrator | Friday 10 April 2026 00:53:03 +0000 (0:00:00.310) 0:05:54.378 ********** 2026-04-10 00:53:06.761574 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.761584 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.761595 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.761605 | orchestrator | 2026-04-10 00:53:06.761616 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-10 00:53:06.761627 | orchestrator | Friday 10 April 2026 00:53:04 +0000 (0:00:00.846) 0:05:55.225 ********** 2026-04-10 00:53:06.761637 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:53:06.761648 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:53:06.761658 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:53:06.761668 | orchestrator | 2026-04-10 00:53:06.761679 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:53:06.761689 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-10 00:53:06.761701 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-10 00:53:06.761712 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-10 00:53:06.761722 | orchestrator | 2026-04-10 00:53:06.761733 | orchestrator | 2026-04-10 00:53:06.761743 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:53:06.761754 | orchestrator | Friday 10 April 2026 00:53:05 +0000 (0:00:00.900) 0:05:56.126 ********** 2026-04-10 
00:53:06.761765 | orchestrator | =============================================================================== 2026-04-10 00:53:06.761775 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.34s 2026-04-10 00:53:06.761785 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.10s 2026-04-10 00:53:06.761795 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.95s 2026-04-10 00:53:06.761806 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.71s 2026-04-10 00:53:06.761824 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.36s 2026-04-10 00:53:06.761835 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.33s 2026-04-10 00:53:06.761846 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.25s 2026-04-10 00:53:06.761862 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.16s 2026-04-10 00:53:06.761879 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.74s 2026-04-10 00:53:06.761889 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.56s 2026-04-10 00:53:06.761900 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.27s 2026-04-10 00:53:06.761911 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.23s 2026-04-10 00:53:06.761921 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.21s 2026-04-10 00:53:06.761932 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.00s 2026-04-10 00:53:06.761943 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.91s 2026-04-10 00:53:06.761953 
| orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.84s 2026-04-10 00:53:06.761962 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.79s 2026-04-10 00:53:06.761971 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.79s 2026-04-10 00:53:06.761980 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.70s 2026-04-10 00:53:06.761989 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.60s 2026-04-10 00:53:06.761998 | orchestrator | 2026-04-10 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:53:09.780913 | orchestrator | 2026-04-10 00:53:09 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:53:09.782902 | orchestrator | 2026-04-10 00:53:09 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:53:09.784879 | orchestrator | 2026-04-10 00:53:09 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state STARTED 2026-04-10 00:53:09.784972 | orchestrator | 2026-04-10 00:53:09 | INFO  | Wait 1 second(s) until the next check [… identical polling cycles for the same three tasks, repeated roughly every 3 seconds from 00:53:12 to 00:54:53, trimmed …] 2026-04-10 00:54:56.520190 | orchestrator | 2026-04-10 00:54:56 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:54:56.524780 | orchestrator | 2026-04-10 00:54:56 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:54:56.530773 | orchestrator | 2026-04-10 00:54:56 | INFO  | Task 7674a3b5-3522-4886-9fdf-f5455829d4d1 is in state SUCCESS 2026-04-10 00:54:56.532071 | orchestrator | 2026-04-10 00:54:56.532122 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-10 00:54:56.532130 | orchestrator | 2.16.14 2026-04-10 00:54:56.532136 | orchestrator | 2026-04-10 00:54:56.532141 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-10 00:54:56.532147 | orchestrator | 2026-04-10 00:54:56.532152 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-10 00:54:56.532157 | orchestrator | Friday 10 April 2026 00:44:35 +0000 
(0:00:00.713) 0:00:00.713 ********** 2026-04-10 00:54:56.532163 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.532169 | orchestrator | 2026-04-10 00:54:56.532174 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-10 00:54:56.532178 | orchestrator | Friday 10 April 2026 00:44:36 +0000 (0:00:01.117) 0:00:01.831 ********** 2026-04-10 00:54:56.532183 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532187 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532192 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532196 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532200 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532203 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532207 | orchestrator | 2026-04-10 00:54:56.532211 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-10 00:54:56.532215 | orchestrator | Friday 10 April 2026 00:44:38 +0000 (0:00:01.842) 0:00:03.675 ********** 2026-04-10 00:54:56.532219 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532223 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532226 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532230 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532234 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532238 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532242 | orchestrator | 2026-04-10 00:54:56.532246 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-10 00:54:56.532267 | orchestrator | Friday 10 April 2026 00:44:38 +0000 (0:00:00.907) 0:00:04.583 ********** 2026-04-10 00:54:56.532307 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532313 | orchestrator | ok: [testbed-node-1] 
2026-04-10 00:54:56.532320 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532326 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532331 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532337 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532343 | orchestrator | 2026-04-10 00:54:56.532349 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-10 00:54:56.532355 | orchestrator | Friday 10 April 2026 00:44:39 +0000 (0:00:00.784) 0:00:05.367 ********** 2026-04-10 00:54:56.532359 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532363 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532366 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532370 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532374 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532378 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532381 | orchestrator | 2026-04-10 00:54:56.532385 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-10 00:54:56.532389 | orchestrator | Friday 10 April 2026 00:44:40 +0000 (0:00:00.691) 0:00:06.059 ********** 2026-04-10 00:54:56.532393 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532396 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532400 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532404 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532408 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532411 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532415 | orchestrator | 2026-04-10 00:54:56.532419 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-10 00:54:56.532423 | orchestrator | Friday 10 April 2026 00:44:41 +0000 (0:00:00.581) 0:00:06.641 ********** 2026-04-10 00:54:56.532437 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532441 | 
orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532445 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532455 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532459 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532462 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532466 | orchestrator | 2026-04-10 00:54:56.532470 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-10 00:54:56.532474 | orchestrator | Friday 10 April 2026 00:44:42 +0000 (0:00:01.626) 0:00:08.267 ********** 2026-04-10 00:54:56.532478 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532483 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.532487 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.532491 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.532495 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.532498 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.532502 | orchestrator | 2026-04-10 00:54:56.532517 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-10 00:54:56.532521 | orchestrator | Friday 10 April 2026 00:44:43 +0000 (0:00:00.708) 0:00:08.976 ********** 2026-04-10 00:54:56.532525 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532529 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532532 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532536 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532540 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532544 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532547 | orchestrator | 2026-04-10 00:54:56.532551 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-10 00:54:56.532555 | orchestrator | Friday 10 April 2026 00:44:44 +0000 (0:00:01.139) 0:00:10.115 ********** 2026-04-10 
00:54:56.532559 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-10 00:54:56.532562 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-10 00:54:56.532566 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-10 00:54:56.532570 | orchestrator | 2026-04-10 00:54:56.532581 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-10 00:54:56.532585 | orchestrator | Friday 10 April 2026 00:44:45 +0000 (0:00:00.823) 0:00:10.939 ********** 2026-04-10 00:54:56.532588 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532592 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532596 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532599 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532614 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532618 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532622 | orchestrator | 2026-04-10 00:54:56.532626 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-10 00:54:56.532630 | orchestrator | Friday 10 April 2026 00:44:47 +0000 (0:00:01.812) 0:00:12.752 ********** 2026-04-10 00:54:56.532633 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-10 00:54:56.532637 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-10 00:54:56.532641 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-10 00:54:56.532645 | orchestrator | 2026-04-10 00:54:56.532648 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-10 00:54:56.532652 | orchestrator | Friday 10 April 2026 00:44:49 +0000 (0:00:02.819) 0:00:15.572 ********** 2026-04-10 00:54:56.532656 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-0)  2026-04-10 00:54:56.532660 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-10 00:54:56.532663 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-10 00:54:56.532667 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532671 | orchestrator | 2026-04-10 00:54:56.532674 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-10 00:54:56.532678 | orchestrator | Friday 10 April 2026 00:44:50 +0000 (0:00:00.902) 0:00:16.474 ********** 2026-04-10 00:54:56.532685 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532695 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532699 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532703 | orchestrator | 2026-04-10 00:54:56.532707 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-10 00:54:56.532711 | orchestrator | Friday 10 April 2026 00:44:52 +0000 (0:00:01.381) 0:00:17.856 ********** 2026-04-10 00:54:56.532717 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532730 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532739 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532743 | orchestrator | 2026-04-10 00:54:56.532746 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-10 00:54:56.532750 | orchestrator | Friday 10 April 2026 00:44:53 +0000 (0:00:01.292) 0:00:19.149 ********** 2026-04-10 00:54:56.532759 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-10 00:44:47.800682', 'end': '2026-04-10 00:44:47.895547', 'delta': '0:00:00.094865', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 
'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532766 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-10 00:44:48.431466', 'end': '2026-04-10 00:44:48.532090', 'delta': '0:00:00.100624', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532770 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-10 00:44:49.483904', 'end': '2026-04-10 00:44:49.589313', 'delta': '0:00:00.105409', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.532774 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532778 | orchestrator | 2026-04-10 00:54:56.532782 | orchestrator | TASK [ceph-facts : Set_fact 
_container_exec_cmd] ******************************* 2026-04-10 00:54:56.532786 | orchestrator | Friday 10 April 2026 00:44:54 +0000 (0:00:00.632) 0:00:19.781 ********** 2026-04-10 00:54:56.532789 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532793 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.532797 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.532801 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.532804 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.532808 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.532812 | orchestrator | 2026-04-10 00:54:56.532815 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-10 00:54:56.532819 | orchestrator | Friday 10 April 2026 00:44:56 +0000 (0:00:02.457) 0:00:22.239 ********** 2026-04-10 00:54:56.532823 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.532827 | orchestrator | 2026-04-10 00:54:56.532830 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-10 00:54:56.532838 | orchestrator | Friday 10 April 2026 00:44:57 +0000 (0:00:00.940) 0:00:23.179 ********** 2026-04-10 00:54:56.532842 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532846 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.532849 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.532853 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.532857 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.532861 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.532864 | orchestrator | 2026-04-10 00:54:56.532868 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-10 00:54:56.532872 | orchestrator | Friday 10 April 2026 00:44:58 +0000 (0:00:01.039) 0:00:24.219 ********** 2026-04-10 00:54:56.532876 | orchestrator | skipping: [testbed-node-0] 2026-04-10 
00:54:56.532879 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.532883 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.532887 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.532890 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.532894 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.532898 | orchestrator | 2026-04-10 00:54:56.532901 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-10 00:54:56.532908 | orchestrator | Friday 10 April 2026 00:44:59 +0000 (0:00:01.244) 0:00:25.464 ********** 2026-04-10 00:54:56.532912 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532916 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.532920 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.532924 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.532927 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.532931 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.532935 | orchestrator | 2026-04-10 00:54:56.532939 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-10 00:54:56.532943 | orchestrator | Friday 10 April 2026 00:45:00 +0000 (0:00:00.767) 0:00:26.231 ********** 2026-04-10 00:54:56.532946 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532950 | orchestrator | 2026-04-10 00:54:56.532956 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-10 00:54:56.532962 | orchestrator | Friday 10 April 2026 00:45:00 +0000 (0:00:00.100) 0:00:26.332 ********** 2026-04-10 00:54:56.532968 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532975 | orchestrator | 2026-04-10 00:54:56.532980 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-10 00:54:56.532986 | orchestrator | Friday 10 April 
2026 00:45:00 +0000 (0:00:00.152) 0:00:26.484 ********** 2026-04-10 00:54:56.532991 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.532997 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.533003 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.533008 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.533014 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.533059 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.533065 | orchestrator | 2026-04-10 00:54:56.533149 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-10 00:54:56.533157 | orchestrator | Friday 10 April 2026 00:45:01 +0000 (0:00:00.492) 0:00:26.976 ********** 2026-04-10 00:54:56.533301 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.533314 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.533320 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.533327 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.533333 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.533338 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.533344 | orchestrator | 2026-04-10 00:54:56.533351 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-10 00:54:56.533357 | orchestrator | Friday 10 April 2026 00:45:02 +0000 (0:00:01.047) 0:00:28.024 ********** 2026-04-10 00:54:56.533363 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.533379 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.533385 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.533389 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.533392 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.533396 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.533400 | orchestrator | 2026-04-10 00:54:56.533404 | orchestrator | TASK 
[ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-10 00:54:56.533408 | orchestrator | Friday 10 April 2026 00:45:02 +0000 (0:00:00.512) 0:00:28.536 ********** 2026-04-10 00:54:56.533411 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.533415 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.533419 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.533423 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.533427 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.533430 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.533434 | orchestrator | 2026-04-10 00:54:56.533438 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-10 00:54:56.533442 | orchestrator | Friday 10 April 2026 00:45:03 +0000 (0:00:00.752) 0:00:29.289 ********** 2026-04-10 00:54:56.533446 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.533450 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.533454 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.533458 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.533462 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.533466 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.533469 | orchestrator | 2026-04-10 00:54:56.533473 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-10 00:54:56.533479 | orchestrator | Friday 10 April 2026 00:45:04 +0000 (0:00:00.636) 0:00:29.925 ********** 2026-04-10 00:54:56.533485 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.533494 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.533501 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.533507 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.533513 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.533519 
| orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.533525 | orchestrator | 2026-04-10 00:54:56.533530 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-10 00:54:56.533536 | orchestrator | Friday 10 April 2026 00:45:05 +0000 (0:00:00.860) 0:00:30.786 ********** 2026-04-10 00:54:56.533542 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.533547 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.533553 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.533559 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.533565 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.533571 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.533577 | orchestrator | 2026-04-10 00:54:56.533583 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-10 00:54:56.533589 | orchestrator | Friday 10 April 2026 00:45:05 +0000 (0:00:00.642) 0:00:31.428 ********** 2026-04-10 00:54:56.533596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533617 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part1', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part14', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part15', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part16', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.533912 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.533917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533930 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.533992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534000 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.534004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part1', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part14', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part15', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part16', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534133 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659', 'dm-uuid-LVM-HmBRIWxGLI3EGV6kV75sVNxgbPSB5omXHrIQIzDei9cb7WRNNAqcgK7AytWK3YKZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0', 'dm-uuid-LVM-8Osw97PfL7yOFGzOX4qgZueyeAhWhOmOkCSzx6ohrwri6Ap1yw3bOZM3asUyFbv6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534597 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.534605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JP9aDr-LzDf-aWue-EhD0-vBcD-llKo-fbqbH0', 'scsi-0QEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a', 'scsi-SQEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoFq0I-grCm-XrVi-NRfm-Ddwc-OPpb-h3TY7p', 'scsi-0QEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e', 'scsi-SQEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755', 'scsi-SQEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64', 'dm-uuid-LVM-sz21BL9rKXHUXi7MHvzuiuEYOO4GuVHIzP8DshAgnCVBbJYANYokb9PpLuHhy1UX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81', 'dm-uuid-LVM-bLjMtbKkcMY1XBDWcSo4rp9t2ScEoyS6X4oShYcxTkNtN21H8kUDn4qODgM2cnva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534727 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.534731 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534735 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.534742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534758 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TLAoeq-QGKe-um9n-KAtM-mSIj-yfND-V0D9P1', 'scsi-0QEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23', 'scsi-SQEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k3QkEJ-MlaZ-9m4I-xd3v-1d2l-iFuh-tq8K6c', 'scsi-0QEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd', 'scsi-SQEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16', 'scsi-SQEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534780 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.534787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de', 'dm-uuid-LVM-PftoxsgQ52yvPmleTAKNa8K0ekniLGTm5on5NexEjUZz0vte28H1F0vq32VvM5pA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785', 'dm-uuid-LVM-eAqCUQR6qtojDcHqiCNGictIJZdU25jm3vNbBEnjKWJSAV63nUJ3xPpJV0I5T8w0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:54:56.534843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gSDeM1-SD9t-OsNo-wjZN-B14N-pftC-NP9cBN', 'scsi-0QEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec', 'scsi-SQEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-voZ47o-niq9-fm1G-HLxA-Byj8-Cq3I-INaUdT', 'scsi-0QEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf', 'scsi-SQEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8', 'scsi-SQEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:54:56.534874 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.534878 | orchestrator | 2026-04-10 00:54:56.534883 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-10 00:54:56.534887 | orchestrator | Friday 10 April 2026 00:45:07 +0000 (0:00:01.825) 0:00:33.254 ********** 2026-04-10 00:54:56.534891 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534899 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534903 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534907 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534913 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534917 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534924 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534928 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534934 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534938 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534952 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part1', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part14', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part15', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part16', 'scsi-SQEMU_QEMU_HARDDISK_6703ea0b-6978-4dc9-b5ac-852738c6c355-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-10 00:54:56.534960 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534965 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534972 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': 
'506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534976 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534984 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534988 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.534998 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part1', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part14', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part15', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part16', 'scsi-SQEMU_QEMU_HARDDISK_5f24fb99-990b-48ee-9c5e-76fec810005b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535003 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535009 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535179 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535187 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535191 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535195 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535202 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535210 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535217 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535224 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part1', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part14', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part15', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part16', 'scsi-SQEMU_QEMU_HARDDISK_00f505f9-c68a-4ecb-966e-715e991ccb80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535228 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535232 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.535240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659', 'dm-uuid-LVM-HmBRIWxGLI3EGV6kV75sVNxgbPSB5omXHrIQIzDei9cb7WRNNAqcgK7AytWK3YKZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0', 'dm-uuid-LVM-8Osw97PfL7yOFGzOX4qgZueyeAhWhOmOkCSzx6ohrwri6Ap1yw3bOZM3asUyFbv6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-10 00:54:56.535327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535347 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JP9aDr-LzDf-aWue-EhD0-vBcD-llKo-fbqbH0', 'scsi-0QEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a', 'scsi-SQEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoFq0I-grCm-XrVi-NRfm-Ddwc-OPpb-h3TY7p', 'scsi-0QEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e', 'scsi-SQEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535395 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755', 'scsi-SQEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535400 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535407 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.535414 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64', 'dm-uuid-LVM-sz21BL9rKXHUXi7MHvzuiuEYOO4GuVHIzP8DshAgnCVBbJYANYokb9PpLuHhy1UX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535418 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81', 'dm-uuid-LVM-bLjMtbKkcMY1XBDWcSo4rp9t2ScEoyS6X4oShYcxTkNtN21H8kUDn4qODgM2cnva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535422 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535434 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535466 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535472 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16', 
'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TLAoeq-QGKe-um9n-KAtM-mSIj-yfND-V0D9P1', 'scsi-0QEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23', 'scsi-SQEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k3QkEJ-MlaZ-9m4I-xd3v-1d2l-iFuh-tq8K6c', 'scsi-0QEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd', 'scsi-SQEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535518 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16', 'scsi-SQEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535565 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.535572 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.535578 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.535924 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de', 'dm-uuid-LVM-PftoxsgQ52yvPmleTAKNa8K0ekniLGTm5on5NexEjUZz0vte28H1F0vq32VvM5pA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785', 'dm-uuid-LVM-eAqCUQR6qtojDcHqiCNGictIJZdU25jm3vNbBEnjKWJSAV63nUJ3xPpJV0I5T8w0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.535997 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.536001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:54:56.536022 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-10 00:54:56.536032 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gSDeM1-SD9t-OsNo-wjZN-B14N-pftC-NP9cBN', 'scsi-0QEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec', 'scsi-SQEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-10 00:54:56.536036 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-voZ47o-niq9-fm1G-HLxA-Byj8-Cq3I-INaUdT', 'scsi-0QEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf', 'scsi-SQEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-10 00:54:56.536043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8', 'scsi-SQEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-10 00:54:56.536050 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-10 00:54:56.536055 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.536059 | orchestrator |
2026-04-10 00:54:56.536062 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-10 00:54:56.536067 | orchestrator | Friday 10 April 2026 00:45:09 +0000 (0:00:01.615) 0:00:34.869 **********
2026-04-10 00:54:56.536080 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.536109 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.536114 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.536117 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.536121 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.536125 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.536129 | orchestrator |
2026-04-10 00:54:56.536133 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-10 00:54:56.536137 | orchestrator | Friday 10 April 2026 00:45:10 +0000 (0:00:01.073) 0:00:35.943 **********
2026-04-10 00:54:56.536140 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.536144 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.536148 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.536151 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.536155 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.536159 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.536162 | orchestrator |
2026-04-10 00:54:56.536166 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-10 00:54:56.536170 | orchestrator | Friday 10 April 2026 00:45:11 +0000 (0:00:00.995) 0:00:36.685 **********
2026-04-10 00:54:56.536174 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.536177 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.536208 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.536212 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.536245 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.536271 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.536278 | orchestrator |
2026-04-10 00:54:56.536396 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-10 00:54:56.536401 | orchestrator | Friday 10 April 2026 00:45:12 +0000 (0:00:00.612) 0:00:37.680 **********
2026-04-10 00:54:56.536405 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.536408 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.536412 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.536416 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.536420 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.536423 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.536427 | orchestrator |
2026-04-10 00:54:56.536431 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-10 00:54:56.536435 | orchestrator | Friday 10 April 2026 00:45:12 +0000 (0:00:00.808) 0:00:38.292 **********
2026-04-10 00:54:56.536439 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.536457 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.536461 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.536464 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.536468 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.536472 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.536476 | orchestrator |
2026-04-10 00:54:56.536480 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-10 00:54:56.536484 | orchestrator | Friday 10 April 2026 00:45:13 +0000 (0:00:01.006) 0:00:39.101 **********
2026-04-10 00:54:56.536487 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.536491 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.536495 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.536498 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.536502 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.536506 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.536510 | orchestrator |
2026-04-10 00:54:56.536513 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-10 00:54:56.536517 | orchestrator | Friday 10 April 2026 00:45:14 +0000 (0:00:01.006) 0:00:40.108 **********
2026-04-10 00:54:56.536521 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.536526 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-10 00:54:56.536529 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-10 00:54:56.536533 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-10 00:54:56.536537 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-10 00:54:56.536541 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-10 00:54:56.536545 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-10 00:54:56.536548 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-10 00:54:56.536552 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-10 00:54:56.536556 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-10 00:54:56.536559 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-10 00:54:56.536563 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-10 00:54:56.536567 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-10 00:54:56.536574 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-10 00:54:56.536578 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-10 00:54:56.536581 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-10 00:54:56.536585 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-10 00:54:56.536589 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-10 00:54:56.536593 | orchestrator |
2026-04-10 00:54:56.536596 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-10 00:54:56.536600 | orchestrator | Friday 10 April 2026 00:45:19 +0000 (0:00:04.775) 0:00:44.884 **********
2026-04-10 00:54:56.536604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.536608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-10 00:54:56.536612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-10 00:54:56.536616 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.536619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-10 00:54:56.536623 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-10 00:54:56.536627 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-10 00:54:56.536631 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-10 00:54:56.536634 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.536638 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-10 00:54:56.536690 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-10 00:54:56.536696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-10 00:54:56.536704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-10 00:54:56.536708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-10 00:54:56.536711 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.536715 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-10 00:54:56.536759 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-10 00:54:56.536764 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.536767 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-10 00:54:56.536794 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.536800 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-10 00:54:56.536803 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-10 00:54:56.536807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-10 00:54:56.536811 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.536814 | orchestrator |
2026-04-10 00:54:56.536818 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-10 00:54:56.536822 | orchestrator | Friday 10 April 2026 00:45:20 +0000 (0:00:01.030) 0:00:45.914 **********
2026-04-10 00:54:56.536826 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.536830 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.536833 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.536838 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-5, testbed-node-4
2026-04-10 00:54:56.536842 | orchestrator |
2026-04-10 00:54:56.536846 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-10 00:54:56.536851 | orchestrator | Friday 10 April 2026 00:45:21 +0000 (0:00:01.692) 0:00:47.607 **********
2026-04-10 00:54:56.536879 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.536883 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.536887 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.536891 | orchestrator |
2026-04-10 00:54:56.536994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-10 00:54:56.537003 | orchestrator | Friday 10 April 2026 00:45:22 +0000 (0:00:00.542) 0:00:48.149 **********
2026-04-10 00:54:56.537009 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537014 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.537020 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.537025 | orchestrator |
2026-04-10 00:54:56.537031 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-10 00:54:56.537037 | orchestrator | Friday 10 April 2026 00:45:22 +0000 (0:00:00.386) 0:00:48.535 **********
2026-04-10 00:54:56.537042 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537048 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.537054 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.537059 | orchestrator |
2026-04-10 00:54:56.537066 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-10 00:54:56.537072 | orchestrator | Friday 10 April 2026 00:45:23 +0000 (0:00:00.442) 0:00:48.978 **********
2026-04-10 00:54:56.537077 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.537084 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.537090 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.537095 | orchestrator |
2026-04-10 00:54:56.537101 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-10 00:54:56.537107 | orchestrator | Friday 10 April 2026 00:45:24 +0000 (0:00:01.066) 0:00:50.045 **********
2026-04-10 00:54:56.537113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-10 00:54:56.537119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-10 00:54:56.537125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-10 00:54:56.537131 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537137 | orchestrator |
2026-04-10 00:54:56.537151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-10 00:54:56.537158 | orchestrator | Friday 10 April 2026 00:45:25 +0000 (0:00:00.862) 0:00:50.907 **********
2026-04-10 00:54:56.537164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-10 00:54:56.537170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-10 00:54:56.537213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-10 00:54:56.537220 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537226 | orchestrator |
2026-04-10 00:54:56.537233 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-10 00:54:56.537237 | orchestrator | Friday 10 April 2026 00:45:25 +0000 (0:00:00.386) 0:00:51.293 **********
2026-04-10 00:54:56.537240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-10 00:54:56.537244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-10 00:54:56.537248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-10 00:54:56.537271 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537275 | orchestrator |
2026-04-10 00:54:56.537279 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-10 00:54:56.537283 | orchestrator | Friday 10 April 2026 00:45:26 +0000 (0:00:00.402) 0:00:51.696 **********
2026-04-10 00:54:56.537286 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.537291 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.537294 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.537298 | orchestrator |
2026-04-10 00:54:56.537302 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-10 00:54:56.537306 | orchestrator | Friday 10 April 2026 00:45:26 +0000 (0:00:00.507) 0:00:52.203 **********
2026-04-10 00:54:56.537309 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-10 00:54:56.537472 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-10 00:54:56.537479 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-10 00:54:56.537483 | orchestrator |
2026-04-10 00:54:56.537504 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-10 00:54:56.537508 | orchestrator | Friday 10 April 2026 00:45:28 +0000 (0:00:01.857) 0:00:54.061 **********
2026-04-10 00:54:56.537512 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.537517 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-10 00:54:56.537521 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-10 00:54:56.537524 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-10 00:54:56.537528 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-10 00:54:56.537532 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-10 00:54:56.537536 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-10 00:54:56.537540 | orchestrator |
2026-04-10 00:54:56.537543 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-10 00:54:56.537548 | orchestrator | Friday 10 April 2026 00:45:29 +0000 (0:00:01.362) 0:00:55.423 **********
2026-04-10 00:54:56.537551 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.537555 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-10 00:54:56.537559 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-10 00:54:56.537563 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-10 00:54:56.537567 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-10 00:54:56.537570 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-10 00:54:56.537574 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-10 00:54:56.537585 | orchestrator |
2026-04-10 00:54:56.537589 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-10 00:54:56.537592 | orchestrator | Friday 10 April 2026 00:45:31 +0000 (0:00:02.190) 0:00:57.614 **********
2026-04-10 00:54:56.537596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2026-04-10 00:54:56.537601 | orchestrator |
2026-04-10 00:54:56.537605 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-10 00:54:56.537609 | orchestrator | Friday 10 April 2026 00:45:33 +0000 (0:00:01.514) 0:00:59.128 **********
2026-04-10 00:54:56.537612 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:54:56.537616 | orchestrator |
2026-04-10 00:54:56.537620 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-10 00:54:56.537624 | orchestrator | Friday 10 April 2026 00:45:35 +0000 (0:00:01.590) 0:01:00.718 **********
2026-04-10 00:54:56.537628 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537632 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.537635 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.537639 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.537643 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.537647 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.537651 | orchestrator |
2026-04-10 00:54:56.537655 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-10 00:54:56.537658 | orchestrator | Friday 10 April 2026 00:45:36 +0000 (0:00:01.419) 0:01:02.138 **********
2026-04-10 00:54:56.537662 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.537666 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.537669 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.537673 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.537677 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.537681 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.537684 | orchestrator |
2026-04-10 00:54:56.537688 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-10 00:54:56.537696 | orchestrator | Friday 10 April 2026 00:45:37 +0000 (0:00:01.414) 0:01:03.553 **********
2026-04-10 00:54:56.537700 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.537703 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.537707 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.537711 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.537715 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.537719 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.537723 | orchestrator |
2026-04-10 00:54:56.537726 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-10 00:54:56.537730 | orchestrator | Friday 10 April 2026 00:45:39 +0000 (0:00:01.474) 0:01:05.027 **********
2026-04-10 00:54:56.537734 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.537738 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.537744 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.537749 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.537755 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.537760 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.537764 | orchestrator |
2026-04-10 00:54:56.537768 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-10 00:54:56.537771 | orchestrator | Friday 10 April 2026 00:45:40 +0000 (0:00:01.528) 0:01:06.556 **********
2026-04-10 00:54:56.537775 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537779 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.537783 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.537787 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.537790 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.537798 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.537802 | orchestrator |
2026-04-10 00:54:56.537806 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-10 00:54:56.537831 | orchestrator | Friday 10 April 2026 00:45:41 +0000 (0:00:00.975) 0:01:07.531 **********
2026-04-10 00:54:56.537841 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.537847 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.537853 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.537858 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537864 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.537870 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.537877 | orchestrator |
2026-04-10 00:54:56.537882 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-10 00:54:56.537888 | orchestrator | Friday 10 April 2026 00:45:43 +0000 (0:00:01.314) 0:01:08.846 **********
2026-04-10 00:54:56.537894 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.537900 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.537906 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.537912 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.537917 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.537924 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.537929 | orchestrator |
2026-04-10 00:54:56.537935 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-10 00:54:56.537941 | orchestrator | Friday 10 April 2026 00:45:43 +0000 (0:00:00.563) 0:01:09.409 **********
2026-04-10 00:54:56.537946 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.537952 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.537958 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.537964 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.537970 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.537976 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.537980 | orchestrator |
2026-04-10 00:54:56.537983 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-10 00:54:56.537987 | orchestrator | Friday 10 April 2026 00:45:45 +0000 (0:00:01.386) 0:01:10.795 **********
2026-04-10 00:54:56.537991 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.537995 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.537999 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.538003 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.538006 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.538010 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.538050 | orchestrator |
2026-04-10 00:54:56.538054 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-10 00:54:56.538059 | orchestrator | Friday 10 April 2026 00:45:46 +0000 (0:00:01.071) 0:01:11.867 **********
2026-04-10 00:54:56.538062 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.538066 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.538070 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.538074 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.538078 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.538082 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.538085 | orchestrator |
2026-04-10 00:54:56.538089 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-10 00:54:56.538093 | orchestrator | Friday 10 April 2026 00:45:46 +0000 (0:00:00.642) 0:01:12.510 **********
2026-04-10 00:54:56.538097 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.538101 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.538105 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.538109 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.538112 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.538116 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.538120 | orchestrator |
2026-04-10 00:54:56.538124 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-10 00:54:56.538128 | orchestrator | Friday 10 April 2026 00:45:47 +0000 (0:00:00.494) 0:01:13.005 **********
2026-04-10 00:54:56.538138 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.538143 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.538147 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.538151 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.538156 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.538160 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.538164 | orchestrator |
2026-04-10 00:54:56.538168 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-10 00:54:56.538173 | orchestrator | Friday 10 April 2026 00:45:48 +0000 (0:00:00.652) 0:01:13.657 **********
2026-04-10 00:54:56.538177 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.538181 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.538185 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.538190 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.538194 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.538199 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.538203 | orchestrator |
2026-04-10 00:54:56.538207 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-10 00:54:56.538215 | orchestrator | Friday 10 April 2026 00:45:48 +0000 (0:00:00.521) 0:01:14.178 **********
2026-04-10 00:54:56.538219 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.538224 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.538228 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.538232 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.538236 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.538240 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.538245 | orchestrator |
2026-04-10 00:54:56.538361 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-10 00:54:56.538389 | orchestrator | Friday 10 April 2026 00:45:49 +0000 (0:00:00.672) 0:01:14.850 **********
2026-04-10 00:54:56.538393 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.538398 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.538402 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.538407 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.538411 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.538415 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.538419 | orchestrator |
2026-04-10 00:54:56.538423 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-10 00:54:56.538427 | orchestrator | Friday 10 April 2026 00:45:49 +0000 (0:00:00.474) 0:01:15.325 **********
2026-04-10 00:54:56.538431 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.538435 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.538438 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.538442 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.538446 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.538449 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.538453 | orchestrator |
2026-04-10 00:54:56.538505 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-10 00:54:56.538521 | orchestrator | Friday 10 April 2026 00:45:50 +0000 (0:00:00.634) 0:01:15.959 **********
2026-04-10 00:54:56.538525 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.538534 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.538538 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.538542 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.538546 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.538550 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.538554 | orchestrator |
2026-04-10 00:54:56.538558 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-10 00:54:56.538562 | orchestrator | Friday 10 April 2026 00:45:50 +0000 (0:00:00.477) 0:01:16.437 **********
2026-04-10 00:54:56.538566 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.538570 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.538574 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.538577 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.538588 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.538591 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.538595 | orchestrator |
2026-04-10 00:54:56.538599 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-10 00:54:56.538603 | orchestrator | Friday 10 April 2026 00:45:51 +0000 (0:00:00.656) 0:01:17.093 **********
2026-04-10 00:54:56.538606 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.538610 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.538614 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.538618 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.538622 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.538625 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.538629 | orchestrator |
2026-04-10 00:54:56.538633 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-10 00:54:56.538637 | orchestrator | Friday 10 April 2026 00:45:52 +0000 (0:00:01.069) 0:01:18.163 **********
2026-04-10 00:54:56.538641 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.538645 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.538648 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.538652 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:54:56.538656 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:54:56.538660 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:54:56.538664 | orchestrator |
2026-04-10 00:54:56.538668 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-10 00:54:56.538671 | orchestrator | Friday 10 April 2026 00:45:53 +0000 (0:00:01.414) 0:01:19.577
********** 2026-04-10 00:54:56.538675 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.538679 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.538683 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.538687 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.538691 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.538694 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.538698 | orchestrator | 2026-04-10 00:54:56.538702 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-10 00:54:56.538706 | orchestrator | Friday 10 April 2026 00:45:56 +0000 (0:00:02.099) 0:01:21.677 ********** 2026-04-10 00:54:56.538711 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.538715 | orchestrator | 2026-04-10 00:54:56.538719 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-10 00:54:56.538724 | orchestrator | Friday 10 April 2026 00:45:57 +0000 (0:00:01.047) 0:01:22.725 ********** 2026-04-10 00:54:56.538728 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.538731 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.538735 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.538739 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.538743 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.538746 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.538750 | orchestrator | 2026-04-10 00:54:56.538754 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-10 00:54:56.538758 | orchestrator | Friday 10 April 2026 00:45:57 +0000 (0:00:00.555) 0:01:23.281 ********** 2026-04-10 00:54:56.538762 | orchestrator | skipping: [testbed-node-0] 
2026-04-10 00:54:56.538766 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.538769 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.538773 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.538777 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.538781 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.538785 | orchestrator | 2026-04-10 00:54:56.538788 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-10 00:54:56.538800 | orchestrator | Friday 10 April 2026 00:45:58 +0000 (0:00:00.681) 0:01:23.962 ********** 2026-04-10 00:54:56.538804 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-10 00:54:56.538811 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-10 00:54:56.538815 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-10 00:54:56.538819 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-10 00:54:56.538823 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-10 00:54:56.538827 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-10 00:54:56.538830 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-10 00:54:56.538834 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-10 00:54:56.538839 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-10 00:54:56.538842 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-10 00:54:56.538846 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-10 00:54:56.538867 | 
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-10 00:54:56.538872 | orchestrator | 2026-04-10 00:54:56.538876 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-10 00:54:56.538879 | orchestrator | Friday 10 April 2026 00:45:59 +0000 (0:00:01.300) 0:01:25.262 ********** 2026-04-10 00:54:56.538883 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.538887 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.538891 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.538895 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.538899 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.538902 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.538906 | orchestrator | 2026-04-10 00:54:56.538910 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-10 00:54:56.538914 | orchestrator | Friday 10 April 2026 00:46:00 +0000 (0:00:01.160) 0:01:26.424 ********** 2026-04-10 00:54:56.538918 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.538922 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.538926 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.538930 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.538933 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.538937 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.538941 | orchestrator | 2026-04-10 00:54:56.538944 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-10 00:54:56.538948 | orchestrator | Friday 10 April 2026 00:46:01 +0000 (0:00:00.518) 0:01:26.942 ********** 2026-04-10 00:54:56.538952 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.538956 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.538959 | orchestrator | skipping: 
[testbed-node-2] 2026-04-10 00:54:56.538963 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.538967 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.538971 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.538974 | orchestrator | 2026-04-10 00:54:56.538978 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-10 00:54:56.538982 | orchestrator | Friday 10 April 2026 00:46:02 +0000 (0:00:00.693) 0:01:27.635 ********** 2026-04-10 00:54:56.538986 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.538990 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.538994 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.538997 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539001 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539005 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539009 | orchestrator | 2026-04-10 00:54:56.539013 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-10 00:54:56.539021 | orchestrator | Friday 10 April 2026 00:46:02 +0000 (0:00:00.505) 0:01:28.141 ********** 2026-04-10 00:54:56.539025 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.539029 | orchestrator | 2026-04-10 00:54:56.539033 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-10 00:54:56.539037 | orchestrator | Friday 10 April 2026 00:46:03 +0000 (0:00:01.009) 0:01:29.151 ********** 2026-04-10 00:54:56.539041 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.539044 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.539049 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.539052 | orchestrator | ok: [testbed-node-3] 2026-04-10 
00:54:56.539056 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.539060 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.539064 | orchestrator | 2026-04-10 00:54:56.539067 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-10 00:54:56.539071 | orchestrator | Friday 10 April 2026 00:47:25 +0000 (0:01:22.421) 0:02:51.572 ********** 2026-04-10 00:54:56.539075 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-10 00:54:56.539079 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-10 00:54:56.539082 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-10 00:54:56.539086 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539090 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-10 00:54:56.539094 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-10 00:54:56.539100 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-10 00:54:56.539104 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539108 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-10 00:54:56.539111 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-10 00:54:56.539115 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-10 00:54:56.539119 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539123 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-10 00:54:56.539127 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-10 00:54:56.539130 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-04-10 00:54:56.539134 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539138 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-10 00:54:56.539142 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-10 00:54:56.539146 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-10 00:54:56.539149 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539153 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-10 00:54:56.539169 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-10 00:54:56.539174 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-10 00:54:56.539178 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539182 | orchestrator | 2026-04-10 00:54:56.539185 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-10 00:54:56.539189 | orchestrator | Friday 10 April 2026 00:47:26 +0000 (0:00:00.673) 0:02:52.246 ********** 2026-04-10 00:54:56.539193 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539197 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539201 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539217 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539221 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539225 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539229 | orchestrator | 2026-04-10 00:54:56.539233 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-10 00:54:56.539236 | orchestrator | Friday 10 April 2026 00:47:27 +0000 (0:00:00.762) 0:02:53.008 ********** 2026-04-10 00:54:56.539240 | orchestrator | skipping: 
[testbed-node-0] 2026-04-10 00:54:56.539244 | orchestrator | 2026-04-10 00:54:56.539248 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-10 00:54:56.539278 | orchestrator | Friday 10 April 2026 00:47:27 +0000 (0:00:00.196) 0:02:53.205 ********** 2026-04-10 00:54:56.539285 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539290 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539294 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539297 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539301 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539305 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539309 | orchestrator | 2026-04-10 00:54:56.539313 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-10 00:54:56.539317 | orchestrator | Friday 10 April 2026 00:47:28 +0000 (0:00:01.005) 0:02:54.211 ********** 2026-04-10 00:54:56.539320 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539324 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539328 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539332 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539335 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539339 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539343 | orchestrator | 2026-04-10 00:54:56.539347 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-10 00:54:56.539351 | orchestrator | Friday 10 April 2026 00:47:29 +0000 (0:00:00.779) 0:02:54.991 ********** 2026-04-10 00:54:56.539355 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539358 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539362 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539366 | orchestrator | skipping: 
[testbed-node-4] 2026-04-10 00:54:56.539370 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539374 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539378 | orchestrator | 2026-04-10 00:54:56.539382 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-10 00:54:56.539385 | orchestrator | Friday 10 April 2026 00:47:30 +0000 (0:00:01.172) 0:02:56.163 ********** 2026-04-10 00:54:56.539389 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.539393 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.539397 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.539401 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.539404 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.539408 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.539412 | orchestrator | 2026-04-10 00:54:56.539416 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-10 00:54:56.539419 | orchestrator | Friday 10 April 2026 00:47:33 +0000 (0:00:02.777) 0:02:58.941 ********** 2026-04-10 00:54:56.539423 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.539427 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.539431 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.539434 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.539438 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.539442 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.539446 | orchestrator | 2026-04-10 00:54:56.539450 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-10 00:54:56.539453 | orchestrator | Friday 10 April 2026 00:47:33 +0000 (0:00:00.523) 0:02:59.464 ********** 2026-04-10 00:54:56.539458 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-10 00:54:56.539468 | orchestrator | 2026-04-10 00:54:56.539472 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-10 00:54:56.539479 | orchestrator | Friday 10 April 2026 00:47:34 +0000 (0:00:01.109) 0:03:00.573 ********** 2026-04-10 00:54:56.539483 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539487 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539491 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539494 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539498 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539502 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539506 | orchestrator | 2026-04-10 00:54:56.539509 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-10 00:54:56.539513 | orchestrator | Friday 10 April 2026 00:47:35 +0000 (0:00:00.647) 0:03:01.221 ********** 2026-04-10 00:54:56.539517 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539521 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539525 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539528 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539532 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539536 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539540 | orchestrator | 2026-04-10 00:54:56.539543 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-10 00:54:56.539547 | orchestrator | Friday 10 April 2026 00:47:36 +0000 (0:00:00.755) 0:03:01.976 ********** 2026-04-10 00:54:56.539551 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539555 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539559 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539563 | orchestrator | skipping: 
[testbed-node-3] 2026-04-10 00:54:56.539567 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539586 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539590 | orchestrator | 2026-04-10 00:54:56.539594 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-10 00:54:56.539598 | orchestrator | Friday 10 April 2026 00:47:37 +0000 (0:00:00.709) 0:03:02.686 ********** 2026-04-10 00:54:56.539602 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539606 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539610 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539613 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539617 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539621 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539625 | orchestrator | 2026-04-10 00:54:56.539629 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-10 00:54:56.539632 | orchestrator | Friday 10 April 2026 00:47:38 +0000 (0:00:00.950) 0:03:03.636 ********** 2026-04-10 00:54:56.539636 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539640 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539644 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539648 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539651 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539655 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539659 | orchestrator | 2026-04-10 00:54:56.539663 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-10 00:54:56.539666 | orchestrator | Friday 10 April 2026 00:47:38 +0000 (0:00:00.641) 0:03:04.278 ********** 2026-04-10 00:54:56.539670 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539674 | orchestrator | skipping: 
[testbed-node-1] 2026-04-10 00:54:56.539678 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539682 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539685 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539689 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539693 | orchestrator | 2026-04-10 00:54:56.539697 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-10 00:54:56.539704 | orchestrator | Friday 10 April 2026 00:47:39 +0000 (0:00:00.845) 0:03:05.123 ********** 2026-04-10 00:54:56.539708 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539712 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539716 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539719 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539723 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539727 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539731 | orchestrator | 2026-04-10 00:54:56.539734 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-10 00:54:56.539738 | orchestrator | Friday 10 April 2026 00:47:40 +0000 (0:00:00.614) 0:03:05.737 ********** 2026-04-10 00:54:56.539742 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.539746 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.539750 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.539754 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.539757 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.539761 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.539765 | orchestrator | 2026-04-10 00:54:56.539769 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-10 00:54:56.539772 | orchestrator | Friday 10 April 2026 00:47:41 +0000 (0:00:00.900) 0:03:06.638 
********** 2026-04-10 00:54:56.539776 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.539780 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.539784 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.539788 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.539791 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.539795 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.539799 | orchestrator | 2026-04-10 00:54:56.539803 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-10 00:54:56.539806 | orchestrator | Friday 10 April 2026 00:47:42 +0000 (0:00:01.322) 0:03:07.961 ********** 2026-04-10 00:54:56.539810 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.539814 | orchestrator | 2026-04-10 00:54:56.539818 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-10 00:54:56.539822 | orchestrator | Friday 10 April 2026 00:47:43 +0000 (0:00:01.117) 0:03:09.078 ********** 2026-04-10 00:54:56.539826 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-10 00:54:56.539830 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-10 00:54:56.539833 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-10 00:54:56.539840 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-10 00:54:56.539845 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-10 00:54:56.539848 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-10 00:54:56.539852 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-10 00:54:56.539856 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-10 00:54:56.539860 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/) 2026-04-10 00:54:56.539864 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-10 00:54:56.539867 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-10 00:54:56.539871 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-10 00:54:56.539875 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-10 00:54:56.539879 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-10 00:54:56.539882 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-10 00:54:56.539886 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-10 00:54:56.539890 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-10 00:54:56.539894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-10 00:54:56.539901 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-10 00:54:56.539905 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-10 00:54:56.539922 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-10 00:54:56.539927 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-10 00:54:56.539930 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-10 00:54:56.539934 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-10 00:54:56.539938 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-10 00:54:56.539942 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-10 00:54:56.539945 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-10 00:54:56.539949 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-10 00:54:56.539953 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-10 00:54:56.539956 
| orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-10 00:54:56.539960 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-10 00:54:56.539964 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-10 00:54:56.539967 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-10 00:54:56.539971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-10 00:54:56.539975 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-10 00:54:56.539979 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-10 00:54:56.539982 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-10 00:54:56.539986 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-10 00:54:56.539990 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-10 00:54:56.539994 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-10 00:54:56.539998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-10 00:54:56.540001 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-10 00:54:56.540005 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-10 00:54:56.540009 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-10 00:54:56.540013 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-10 00:54:56.540016 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-10 00:54:56.540020 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-10 00:54:56.540024 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-10 00:54:56.540028 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-10 
00:54:56.540032 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-10 00:54:56.540035 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-10 00:54:56.540039 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-10 00:54:56.540043 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-10 00:54:56.540047 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-10 00:54:56.540050 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-10 00:54:56.540054 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-10 00:54:56.540058 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-10 00:54:56.540061 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-10 00:54:56.540065 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-10 00:54:56.540069 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-10 00:54:56.540076 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-10 00:54:56.540080 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-10 00:54:56.540083 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-10 00:54:56.540087 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-10 00:54:56.540094 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-10 00:54:56.540098 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-10 00:54:56.540101 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-10 00:54:56.540105 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-10 00:54:56.540109 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-10 00:54:56.540113 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-10 00:54:56.540116 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-10 00:54:56.540120 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-10 00:54:56.540124 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-10 00:54:56.540128 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-10 00:54:56.540131 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-10 00:54:56.540135 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-10 00:54:56.540139 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-10 00:54:56.540143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-10 00:54:56.540161 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-10 00:54:56.540165 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-10 00:54:56.540169 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-10 00:54:56.540173 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-10 00:54:56.540176 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-10 00:54:56.540180 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-10 00:54:56.540184 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-10 00:54:56.540188 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-10 00:54:56.540191 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-10 00:54:56.540195 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-10 00:54:56.540199 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-10 00:54:56.540203 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-10 00:54:56.540207 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-10 00:54:56.540211 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-10 00:54:56.540222 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-10 00:54:56.540226 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-10 00:54:56.540230 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-10 00:54:56.540241 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-10 00:54:56.540245 | orchestrator |
2026-04-10 00:54:56.540260 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-10 00:54:56.540264 | orchestrator | Friday 10 April 2026 00:47:51 +0000 (0:00:07.669) 0:03:16.748 **********
2026-04-10 00:54:56.540268 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540272 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540276 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540288 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:54:56.540293 | orchestrator |
2026-04-10 00:54:56.540297 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-10 00:54:56.540300 | orchestrator | Friday 10 April 2026 00:47:52 +0000 (0:00:00.898) 0:03:17.647 **********
2026-04-10 00:54:56.540304 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540308 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540313 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540316 | orchestrator |
2026-04-10 00:54:56.540320 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-10 00:54:56.540324 | orchestrator | Friday 10 April 2026 00:47:53 +0000 (0:00:00.975) 0:03:18.622 **********
2026-04-10 00:54:56.540328 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540332 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540335 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540339 | orchestrator |
2026-04-10 00:54:56.540343 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-10 00:54:56.540347 | orchestrator | Friday 10 April 2026 00:47:54 +0000 (0:00:01.443) 0:03:20.065 **********
2026-04-10 00:54:56.540351 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540354 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540362 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540366 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.540369 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.540373 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.540377 | orchestrator |
2026-04-10 00:54:56.540381 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-10 00:54:56.540385 | orchestrator | Friday 10 April 2026 00:47:55 +0000 (0:00:00.566) 0:03:20.632 **********
2026-04-10 00:54:56.540389 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540393 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540396 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540400 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.540404 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.540408 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.540411 | orchestrator |
2026-04-10 00:54:56.540415 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-10 00:54:56.540419 | orchestrator | Friday 10 April 2026 00:47:56 +0000 (0:00:01.018) 0:03:21.651 **********
2026-04-10 00:54:56.540423 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540427 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540431 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540435 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540438 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540442 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540446 | orchestrator |
2026-04-10 00:54:56.540450 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-10 00:54:56.540454 | orchestrator | Friday 10 April 2026 00:47:56 +0000 (0:00:00.549) 0:03:22.200 **********
2026-04-10 00:54:56.540473 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540477 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540481 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540485 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540492 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540496 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540500 | orchestrator |
2026-04-10 00:54:56.540503 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-10 00:54:56.540507 | orchestrator | Friday 10 April 2026 00:47:57 +0000 (0:00:00.657) 0:03:22.858 **********
2026-04-10 00:54:56.540511 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540515 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540519 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540522 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540526 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540530 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540533 | orchestrator |
2026-04-10 00:54:56.540537 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-10 00:54:56.540541 | orchestrator | Friday 10 April 2026 00:47:57 +0000 (0:00:00.548) 0:03:23.406 **********
2026-04-10 00:54:56.540545 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540548 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540552 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540556 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540559 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540563 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540567 | orchestrator |
2026-04-10 00:54:56.540571 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-10 00:54:56.540575 | orchestrator | Friday 10 April 2026 00:47:58 +0000 (0:00:00.580) 0:03:23.987 **********
2026-04-10 00:54:56.540578 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540582 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540586 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540590 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540594 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540598 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540601 | orchestrator |
2026-04-10 00:54:56.540605 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-10 00:54:56.540609 | orchestrator | Friday 10 April 2026 00:47:59 +0000 (0:00:00.724) 0:03:24.711 **********
2026-04-10 00:54:56.540613 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540616 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540620 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540624 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540628 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540632 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540635 | orchestrator |
2026-04-10 00:54:56.540639 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-10 00:54:56.540643 | orchestrator | Friday 10 April 2026 00:47:59 +0000 (0:00:00.677) 0:03:25.389 **********
2026-04-10 00:54:56.540647 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540650 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540654 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540658 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.540662 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.540665 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.540669 | orchestrator |
2026-04-10 00:54:56.540673 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-10 00:54:56.540677 | orchestrator | Friday 10 April 2026 00:48:01 +0000 (0:00:02.086) 0:03:27.475 **********
2026-04-10 00:54:56.540681 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540684 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540688 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540692 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.540696 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.540705 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.540709 | orchestrator |
2026-04-10 00:54:56.540712 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-10 00:54:56.540716 | orchestrator | Friday 10 April 2026 00:48:02 +0000 (0:00:00.508) 0:03:27.984 **********
2026-04-10 00:54:56.540720 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540724 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540727 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540731 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.540735 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.540739 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.540742 | orchestrator |
2026-04-10 00:54:56.540749 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-10 00:54:56.540753 | orchestrator | Friday 10 April 2026 00:48:03 +0000 (0:00:00.764) 0:03:28.749 **********
2026-04-10 00:54:56.540757 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540761 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540765 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540768 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540772 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540776 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540780 | orchestrator |
2026-04-10 00:54:56.540783 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-10 00:54:56.540787 | orchestrator | Friday 10 April 2026 00:48:03 +0000 (0:00:00.474) 0:03:29.224 **********
2026-04-10 00:54:56.540791 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540795 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540798 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540802 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540806 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540810 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-10 00:54:56.540814 | orchestrator |
2026-04-10 00:54:56.540834 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-10 00:54:56.540839 | orchestrator | Friday 10 April 2026 00:48:04 +0000 (0:00:00.839) 0:03:30.064 **********
2026-04-10 00:54:56.540842 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540846 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540850 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540856 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-10 00:54:56.540862 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-10 00:54:56.540867 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540871 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-10 00:54:56.540875 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-10 00:54:56.540881 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540885 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-10 00:54:56.540889 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-10 00:54:56.540893 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540897 | orchestrator |
2026-04-10 00:54:56.540901 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-10 00:54:56.540905 | orchestrator | Friday 10 April 2026 00:48:05 +0000 (0:00:00.586) 0:03:30.650 **********
2026-04-10 00:54:56.540909 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540913 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540916 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540920 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540924 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540927 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540931 | orchestrator |
2026-04-10 00:54:56.540935 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-10 00:54:56.540939 | orchestrator | Friday 10 April 2026 00:48:05 +0000 (0:00:00.721) 0:03:31.371 **********
2026-04-10 00:54:56.540943 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540946 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540950 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540954 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540958 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.540965 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.540970 | orchestrator |
2026-04-10 00:54:56.540973 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-10 00:54:56.540977 | orchestrator | Friday 10 April 2026 00:48:06 +0000 (0:00:00.911) 0:03:31.854 **********
2026-04-10 00:54:56.540981 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.540985 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.540989 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.540993 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.540996 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.541000 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.541004 | orchestrator |
2026-04-10 00:54:56.541008 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-10 00:54:56.541011 | orchestrator | Friday 10 April 2026 00:48:07 +0000 (0:00:00.652) 0:03:32.766 **********
2026-04-10 00:54:56.541015 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541019 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541022 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541026 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541030 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.541033 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.541037 | orchestrator |
2026-04-10 00:54:56.541041 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-10 00:54:56.541045 | orchestrator | Friday 10 April 2026 00:48:07 +0000 (0:00:00.736) 0:03:33.418 **********
2026-04-10 00:54:56.541048 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541066 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541070 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541074 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541081 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.541085 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.541089 | orchestrator |
2026-04-10 00:54:56.541093 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-10 00:54:56.541097 | orchestrator | Friday 10 April 2026 00:48:08 +0000 (0:00:00.736) 0:03:34.155 **********
2026-04-10 00:54:56.541100 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541104 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541108 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541111 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.541115 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.541119 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.541122 | orchestrator |
2026-04-10 00:54:56.541126 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-10 00:54:56.541130 | orchestrator | Friday 10 April 2026 00:48:09 +0000 (0:00:00.596) 0:03:34.751 **********
2026-04-10 00:54:56.541134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-10 00:54:56.541138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-10 00:54:56.541142 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-10 00:54:56.541145 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541149 | orchestrator |
2026-04-10 00:54:56.541153 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-10 00:54:56.541157 | orchestrator | Friday 10 April 2026 00:48:09 +0000 (0:00:00.523) 0:03:35.274 **********
2026-04-10 00:54:56.541161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-10 00:54:56.541164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-10 00:54:56.541168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-10 00:54:56.541172 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541176 | orchestrator |
2026-04-10 00:54:56.541180 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-10 00:54:56.541183 | orchestrator | Friday 10 April 2026 00:48:10 +0000 (0:00:00.782) 0:03:36.057 **********
2026-04-10 00:54:56.541187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-10 00:54:56.541191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-10 00:54:56.541195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-10 00:54:56.541198 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541202 | orchestrator |
2026-04-10 00:54:56.541206 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-10 00:54:56.541210 | orchestrator | Friday 10 April 2026 00:48:10 +0000 (0:00:00.357) 0:03:36.415 **********
2026-04-10 00:54:56.541213 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541217 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541221 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541225 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.541228 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.541232 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.541236 | orchestrator |
2026-04-10 00:54:56.541240 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-10 00:54:56.541244 | orchestrator | Friday 10 April 2026 00:48:11 +0000 (0:00:00.568) 0:03:36.983 **********
2026-04-10 00:54:56.541248 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-10 00:54:56.541267 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541271 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-10 00:54:56.541275 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541279 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-10 00:54:56.541283 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541287 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-10 00:54:56.541291 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-10 00:54:56.541294 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-10 00:54:56.541302 | orchestrator |
2026-04-10 00:54:56.541306 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-10 00:54:56.541309 | orchestrator | Friday 10 April 2026 00:48:12 +0000 (0:00:01.586) 0:03:38.570 **********
2026-04-10 00:54:56.541313 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.541317 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.541321 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.541325 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:54:56.541329 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:54:56.541335 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:54:56.541339 | orchestrator |
2026-04-10 00:54:56.541343 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-10 00:54:56.541347 | orchestrator | Friday 10 April 2026 00:48:15 +0000 (0:00:02.609) 0:03:41.179 **********
2026-04-10 00:54:56.541350 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.541354 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.541358 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.541362 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:54:56.541365 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:54:56.541369 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:54:56.541373 | orchestrator |
2026-04-10 00:54:56.541377 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-10 00:54:56.541380 | orchestrator | Friday 10 April 2026 00:48:16 +0000 (0:00:00.993) 0:03:42.173 **********
2026-04-10 00:54:56.541384 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.541388 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541392 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.541396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.541400 | orchestrator |
2026-04-10 00:54:56.541405 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-10 00:54:56.541409 | orchestrator | Friday 10 April 2026 00:48:17 +0000 (0:00:00.762) 0:03:42.936 **********
2026-04-10 00:54:56.541412 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.541416 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.541420 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.541439 | orchestrator |
2026-04-10 00:54:56.541444 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-10 00:54:56.541448 | orchestrator | Friday 10 April 2026 00:48:17 +0000 (0:00:00.248) 0:03:43.185 **********
2026-04-10 00:54:56.541452 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.541456 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.541459 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.541463 | orchestrator |
2026-04-10 00:54:56.541467 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-10 00:54:56.541471 | orchestrator | Friday 10 April 2026 00:48:18 +0000 (0:00:01.064) 0:03:44.250 **********
2026-04-10 00:54:56.541474 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.541478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-10 00:54:56.541482 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-10 00:54:56.541486 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541490 | orchestrator |
2026-04-10 00:54:56.541494 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-10 00:54:56.541497 | orchestrator | Friday 10 April 2026 00:48:19 +0000 (0:00:00.721) 0:03:44.971 **********
2026-04-10 00:54:56.541501 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.541505 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.541509 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.541512 | orchestrator |
2026-04-10 00:54:56.541516 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-10 00:54:56.541520 | orchestrator | Friday 10 April 2026 00:48:19 +0000 (0:00:00.414) 0:03:45.386 **********
2026-04-10 00:54:56.541524 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541530 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541534 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:54:56.541542 | orchestrator |
2026-04-10 00:54:56.541546 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-10 00:54:56.541549 | orchestrator | Friday 10 April 2026 00:48:20 +0000 (0:00:00.699) 0:03:46.085 **********
2026-04-10 00:54:56.541553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-10 00:54:56.541557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-10 00:54:56.541561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-10 00:54:56.541565 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541568 | orchestrator |
2026-04-10 00:54:56.541572 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-10 00:54:56.541576 | orchestrator | Friday 10 April 2026 00:48:20 +0000 (0:00:00.517) 0:03:46.602 **********
2026-04-10 00:54:56.541580 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541583 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.541587 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.541591 | orchestrator |
2026-04-10 00:54:56.541595 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-10 00:54:56.541598 | orchestrator | Friday 10 April 2026 00:48:21 +0000 (0:00:00.254) 0:03:47.029 **********
2026-04-10 00:54:56.541602 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541606 | orchestrator |
2026-04-10 00:54:56.541610 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-10 00:54:56.541613 | orchestrator | Friday 10 April 2026 00:48:21 +0000 (0:00:00.254) 0:03:47.283 **********
2026-04-10 00:54:56.541617 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541621 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.541625 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.541628 | orchestrator |
2026-04-10 00:54:56.541632 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-10 00:54:56.541636 | orchestrator | Friday 10 April 2026 00:48:21 +0000 (0:00:00.253) 0:03:47.537 **********
2026-04-10 00:54:56.541640 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541643 | orchestrator |
2026-04-10 00:54:56.541647 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-10 00:54:56.541651 | orchestrator | Friday 10 April 2026 00:48:22 +0000 (0:00:00.147) 0:03:47.684 **********
2026-04-10 00:54:56.541655 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541658 | orchestrator |
2026-04-10 00:54:56.541662 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-10 00:54:56.541668 | orchestrator | Friday 10 April 2026 00:48:22 +0000 (0:00:00.189) 0:03:47.873 **********
2026-04-10 00:54:56.541672 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541676 | orchestrator |
2026-04-10 00:54:56.541679 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-10 00:54:56.541683 | orchestrator | Friday 10 April 2026 00:48:22 +0000 (0:00:00.089) 0:03:47.963 **********
2026-04-10 00:54:56.541687 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541690 | orchestrator |
2026-04-10 00:54:56.541694 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-10 00:54:56.541698 | orchestrator | Friday 10 April 2026 00:48:22 +0000 (0:00:00.173) 0:03:48.137 **********
2026-04-10 00:54:56.541702 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541706 | orchestrator |
2026-04-10 00:54:56.541709 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-10 00:54:56.541713 | orchestrator | Friday 10 April 2026 00:48:22 +0000 (0:00:00.185) 0:03:48.322 **********
2026-04-10 00:54:56.541717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-10 00:54:56.541721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-10 00:54:56.541728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-10 00:54:56.541732 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541736 | orchestrator |
2026-04-10 00:54:56.541739 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-10 00:54:56.541743 | orchestrator | Friday 10 April 2026 00:48:23 +0000 (0:00:00.601) 0:03:48.923 **********
2026-04-10 00:54:56.541747 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541764 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.541769 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.541772 | orchestrator |
2026-04-10 00:54:56.541776 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-10 00:54:56.541780 | orchestrator | Friday 10 April 2026 00:48:23 +0000 (0:00:00.454) 0:03:49.378 **********
2026-04-10 00:54:56.541784 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541788 | orchestrator |
2026-04-10 00:54:56.541792 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-10 00:54:56.541796 | orchestrator | Friday 10 April 2026 00:48:23 +0000 (0:00:00.196) 0:03:49.575 **********
2026-04-10 00:54:56.541800 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541803 | orchestrator |
2026-04-10 00:54:56.541807 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-10 00:54:56.541811 | orchestrator | Friday 10 April 2026 00:48:24 +0000 (0:00:00.183) 0:03:49.758 **********
2026-04-10 00:54:56.541815 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541818 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541822 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:54:56.541830 | orchestrator |
2026-04-10 00:54:56.541834 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-10 00:54:56.541838 | orchestrator | Friday 10 April 2026 00:48:24 +0000 (0:00:00.805) 0:03:50.563 **********
2026-04-10 00:54:56.541842 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.541845 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.541849 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.541853 | orchestrator |
2026-04-10 00:54:56.541857 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-10 00:54:56.541861 | orchestrator | Friday 10 April 2026 00:48:25 +0000 (0:00:00.275) 0:03:50.839 **********
2026-04-10 00:54:56.541865 | orchestrator | changed: [testbed-node-3]
2026-04-10 00:54:56.541868 | orchestrator | changed: [testbed-node-4]
2026-04-10 00:54:56.541872 | orchestrator | changed: [testbed-node-5]
2026-04-10 00:54:56.541876 | orchestrator |
2026-04-10 00:54:56.541880 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-10 00:54:56.541884 | orchestrator | Friday 10 April 2026 00:48:26 +0000 (0:00:01.003) 0:03:51.842 **********
2026-04-10 00:54:56.541888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-10 00:54:56.541891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-10 00:54:56.541895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-10 00:54:56.541899 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.541902 | orchestrator |
2026-04-10 00:54:56.541906 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-10 00:54:56.541910 | orchestrator | Friday 10 April 2026 00:48:26 +0000 (0:00:00.685) 0:03:52.527 **********
2026-04-10 00:54:56.541914 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.541917 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.541921 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.541925 | orchestrator |
2026-04-10 00:54:56.541928 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-10 00:54:56.541932 | orchestrator | Friday 10 April 2026 00:48:27 +0000 (0:00:00.244) 0:03:52.772 **********
2026-04-10 00:54:56.541936 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.541944 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.541948 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.541952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:54:56.541956 | orchestrator |
2026-04-10 00:54:56.541959 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-10 00:54:56.541963 | orchestrator | Friday 10 April 2026 00:48:28 +0000 (0:00:00.853) 0:03:53.625 **********
2026-04-10 00:54:56.541967 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.541971 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.541975 | orchestrator | ok:
[testbed-node-5] 2026-04-10 00:54:56.541978 | orchestrator | 2026-04-10 00:54:56.541982 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-10 00:54:56.541986 | orchestrator | Friday 10 April 2026 00:48:28 +0000 (0:00:00.275) 0:03:53.901 ********** 2026-04-10 00:54:56.541990 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.541993 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.541997 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.542001 | orchestrator | 2026-04-10 00:54:56.542007 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-10 00:54:56.542011 | orchestrator | Friday 10 April 2026 00:48:29 +0000 (0:00:01.308) 0:03:55.209 ********** 2026-04-10 00:54:56.542042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:54:56.542046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:54:56.542049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:54:56.542053 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.542057 | orchestrator | 2026-04-10 00:54:56.542061 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-10 00:54:56.542064 | orchestrator | Friday 10 April 2026 00:48:30 +0000 (0:00:00.636) 0:03:55.845 ********** 2026-04-10 00:54:56.542068 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.542072 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.542076 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.542080 | orchestrator | 2026-04-10 00:54:56.542084 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-10 00:54:56.542087 | orchestrator | Friday 10 April 2026 00:48:30 +0000 (0:00:00.386) 0:03:56.232 ********** 2026-04-10 00:54:56.542091 | orchestrator | skipping: [testbed-node-0] 
2026-04-10 00:54:56.542095 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542099 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542102 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.542106 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.542110 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.542114 | orchestrator | 2026-04-10 00:54:56.542132 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-10 00:54:56.542137 | orchestrator | Friday 10 April 2026 00:48:31 +0000 (0:00:00.706) 0:03:56.938 ********** 2026-04-10 00:54:56.542141 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.542145 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.542148 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.542152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:54:56.542156 | orchestrator | 2026-04-10 00:54:56.542160 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-10 00:54:56.542163 | orchestrator | Friday 10 April 2026 00:48:32 +0000 (0:00:01.231) 0:03:58.170 ********** 2026-04-10 00:54:56.542167 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542171 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542174 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542178 | orchestrator | 2026-04-10 00:54:56.542182 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-10 00:54:56.542186 | orchestrator | Friday 10 April 2026 00:48:32 +0000 (0:00:00.362) 0:03:58.533 ********** 2026-04-10 00:54:56.542193 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.542197 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.542201 | orchestrator | changed: [testbed-node-2] 2026-04-10 
00:54:56.542205 | orchestrator | 2026-04-10 00:54:56.542209 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-10 00:54:56.542213 | orchestrator | Friday 10 April 2026 00:48:34 +0000 (0:00:01.571) 0:04:00.104 ********** 2026-04-10 00:54:56.542216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-10 00:54:56.542220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-10 00:54:56.542224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-10 00:54:56.542228 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542232 | orchestrator | 2026-04-10 00:54:56.542236 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-10 00:54:56.542239 | orchestrator | Friday 10 April 2026 00:48:35 +0000 (0:00:00.733) 0:04:00.837 ********** 2026-04-10 00:54:56.542243 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542247 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542286 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542290 | orchestrator | 2026-04-10 00:54:56.542294 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-10 00:54:56.542298 | orchestrator | 2026-04-10 00:54:56.542301 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-10 00:54:56.542305 | orchestrator | Friday 10 April 2026 00:48:35 +0000 (0:00:00.574) 0:04:01.412 ********** 2026-04-10 00:54:56.542309 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:54:56.542313 | orchestrator | 2026-04-10 00:54:56.542317 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-10 00:54:56.542321 | orchestrator | Friday 10 April 2026 00:48:36 +0000 
(0:00:00.928) 0:04:02.340 ********** 2026-04-10 00:54:56.542325 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:54:56.542328 | orchestrator | 2026-04-10 00:54:56.542332 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-10 00:54:56.542336 | orchestrator | Friday 10 April 2026 00:48:37 +0000 (0:00:00.999) 0:04:03.339 ********** 2026-04-10 00:54:56.542340 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542344 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542347 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542351 | orchestrator | 2026-04-10 00:54:56.542355 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-10 00:54:56.542359 | orchestrator | Friday 10 April 2026 00:48:38 +0000 (0:00:00.909) 0:04:04.248 ********** 2026-04-10 00:54:56.542363 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542366 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542370 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542374 | orchestrator | 2026-04-10 00:54:56.542378 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-10 00:54:56.542382 | orchestrator | Friday 10 April 2026 00:48:39 +0000 (0:00:00.643) 0:04:04.892 ********** 2026-04-10 00:54:56.542385 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542389 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542393 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542397 | orchestrator | 2026-04-10 00:54:56.542404 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-10 00:54:56.542408 | orchestrator | Friday 10 April 2026 00:48:39 +0000 (0:00:00.348) 0:04:05.240 ********** 2026-04-10 00:54:56.542412 | 
orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542415 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542419 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542423 | orchestrator | 2026-04-10 00:54:56.542427 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-10 00:54:56.542434 | orchestrator | Friday 10 April 2026 00:48:39 +0000 (0:00:00.342) 0:04:05.583 ********** 2026-04-10 00:54:56.542438 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542442 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542446 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542450 | orchestrator | 2026-04-10 00:54:56.542454 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-10 00:54:56.542457 | orchestrator | Friday 10 April 2026 00:48:40 +0000 (0:00:00.750) 0:04:06.334 ********** 2026-04-10 00:54:56.542461 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542465 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542469 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542472 | orchestrator | 2026-04-10 00:54:56.542476 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-10 00:54:56.542480 | orchestrator | Friday 10 April 2026 00:48:41 +0000 (0:00:00.607) 0:04:06.942 ********** 2026-04-10 00:54:56.542484 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542488 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542492 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542495 | orchestrator | 2026-04-10 00:54:56.542513 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-10 00:54:56.542518 | orchestrator | Friday 10 April 2026 00:48:41 +0000 (0:00:00.319) 0:04:07.261 ********** 2026-04-10 00:54:56.542521 | orchestrator | ok: 
[testbed-node-0] 2026-04-10 00:54:56.542525 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542529 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542533 | orchestrator | 2026-04-10 00:54:56.542537 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-10 00:54:56.542540 | orchestrator | Friday 10 April 2026 00:48:42 +0000 (0:00:00.686) 0:04:07.948 ********** 2026-04-10 00:54:56.542544 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542548 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542552 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542555 | orchestrator | 2026-04-10 00:54:56.542559 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-10 00:54:56.542563 | orchestrator | Friday 10 April 2026 00:48:43 +0000 (0:00:00.670) 0:04:08.618 ********** 2026-04-10 00:54:56.542566 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542570 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542574 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542578 | orchestrator | 2026-04-10 00:54:56.542581 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-10 00:54:56.542585 | orchestrator | Friday 10 April 2026 00:48:43 +0000 (0:00:00.581) 0:04:09.200 ********** 2026-04-10 00:54:56.542589 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542593 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542596 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542600 | orchestrator | 2026-04-10 00:54:56.542604 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-10 00:54:56.542608 | orchestrator | Friday 10 April 2026 00:48:43 +0000 (0:00:00.395) 0:04:09.595 ********** 2026-04-10 00:54:56.542611 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542615 | 
orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542619 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542622 | orchestrator | 2026-04-10 00:54:56.542626 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-10 00:54:56.542630 | orchestrator | Friday 10 April 2026 00:48:44 +0000 (0:00:00.440) 0:04:10.036 ********** 2026-04-10 00:54:56.542634 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542637 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542641 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542645 | orchestrator | 2026-04-10 00:54:56.542648 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-10 00:54:56.542652 | orchestrator | Friday 10 April 2026 00:48:44 +0000 (0:00:00.318) 0:04:10.355 ********** 2026-04-10 00:54:56.542661 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542664 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542668 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542672 | orchestrator | 2026-04-10 00:54:56.542676 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-10 00:54:56.542680 | orchestrator | Friday 10 April 2026 00:48:45 +0000 (0:00:00.297) 0:04:10.652 ********** 2026-04-10 00:54:56.542683 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542687 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542691 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542695 | orchestrator | 2026-04-10 00:54:56.542698 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-10 00:54:56.542702 | orchestrator | Friday 10 April 2026 00:48:45 +0000 (0:00:00.632) 0:04:11.284 ********** 2026-04-10 00:54:56.542706 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542710 | 
orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.542713 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.542717 | orchestrator | 2026-04-10 00:54:56.542721 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-10 00:54:56.542724 | orchestrator | Friday 10 April 2026 00:48:45 +0000 (0:00:00.278) 0:04:11.563 ********** 2026-04-10 00:54:56.542728 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542732 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542736 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542739 | orchestrator | 2026-04-10 00:54:56.542743 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-10 00:54:56.542747 | orchestrator | Friday 10 April 2026 00:48:46 +0000 (0:00:00.326) 0:04:11.889 ********** 2026-04-10 00:54:56.542751 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542754 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542758 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542762 | orchestrator | 2026-04-10 00:54:56.542765 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-10 00:54:56.542772 | orchestrator | Friday 10 April 2026 00:48:46 +0000 (0:00:00.665) 0:04:12.555 ********** 2026-04-10 00:54:56.542776 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542779 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542783 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542787 | orchestrator | 2026-04-10 00:54:56.542790 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-10 00:54:56.542794 | orchestrator | Friday 10 April 2026 00:48:47 +0000 (0:00:00.615) 0:04:13.171 ********** 2026-04-10 00:54:56.542798 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542802 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542805 
| orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542809 | orchestrator | 2026-04-10 00:54:56.542813 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-10 00:54:56.542817 | orchestrator | Friday 10 April 2026 00:48:47 +0000 (0:00:00.306) 0:04:13.477 ********** 2026-04-10 00:54:56.542821 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:54:56.542824 | orchestrator | 2026-04-10 00:54:56.542828 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-10 00:54:56.542832 | orchestrator | Friday 10 April 2026 00:48:48 +0000 (0:00:00.678) 0:04:14.156 ********** 2026-04-10 00:54:56.542836 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.542839 | orchestrator | 2026-04-10 00:54:56.542843 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-10 00:54:56.542861 | orchestrator | Friday 10 April 2026 00:48:48 +0000 (0:00:00.151) 0:04:14.307 ********** 2026-04-10 00:54:56.542865 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-10 00:54:56.542869 | orchestrator | 2026-04-10 00:54:56.542873 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-10 00:54:56.542883 | orchestrator | Friday 10 April 2026 00:48:49 +0000 (0:00:01.130) 0:04:15.438 ********** 2026-04-10 00:54:56.542887 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542891 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542895 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542898 | orchestrator | 2026-04-10 00:54:56.542902 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-10 00:54:56.542906 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.293) 0:04:15.731 ********** 2026-04-10 00:54:56.542910 
| orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.542913 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.542917 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.542921 | orchestrator | 2026-04-10 00:54:56.542925 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-10 00:54:56.542928 | orchestrator | Friday 10 April 2026 00:48:50 +0000 (0:00:00.314) 0:04:16.045 ********** 2026-04-10 00:54:56.542932 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.542936 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.542940 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.542943 | orchestrator | 2026-04-10 00:54:56.542947 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-10 00:54:56.542951 | orchestrator | Friday 10 April 2026 00:48:51 +0000 (0:00:01.219) 0:04:17.265 ********** 2026-04-10 00:54:56.542954 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.542958 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.542962 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.542966 | orchestrator | 2026-04-10 00:54:56.542969 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-10 00:54:56.542973 | orchestrator | Friday 10 April 2026 00:48:52 +0000 (0:00:00.986) 0:04:18.252 ********** 2026-04-10 00:54:56.542977 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.542981 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.542984 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.542988 | orchestrator | 2026-04-10 00:54:56.542992 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-10 00:54:56.542995 | orchestrator | Friday 10 April 2026 00:48:53 +0000 (0:00:01.030) 0:04:19.283 ********** 2026-04-10 00:54:56.542999 | orchestrator | ok: 
[testbed-node-1] 2026-04-10 00:54:56.543003 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.543007 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.543010 | orchestrator | 2026-04-10 00:54:56.543014 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-10 00:54:56.543018 | orchestrator | Friday 10 April 2026 00:48:54 +0000 (0:00:00.868) 0:04:20.152 ********** 2026-04-10 00:54:56.543022 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.543026 | orchestrator | 2026-04-10 00:54:56.543030 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-10 00:54:56.543033 | orchestrator | Friday 10 April 2026 00:48:56 +0000 (0:00:02.179) 0:04:22.331 ********** 2026-04-10 00:54:56.543037 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.543041 | orchestrator | 2026-04-10 00:54:56.543045 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-10 00:54:56.543048 | orchestrator | Friday 10 April 2026 00:48:57 +0000 (0:00:00.764) 0:04:23.095 ********** 2026-04-10 00:54:56.543052 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-10 00:54:56.543056 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.543059 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.543063 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-10 00:54:56.543067 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-10 00:54:56.543071 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-10 00:54:56.543075 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:54:56.543083 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-10 00:54:56.543087 | 
orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-10 00:54:56.543091 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-10 00:54:56.543094 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:54:56.543098 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-10 00:54:56.543102 | orchestrator | 2026-04-10 00:54:56.543108 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-10 00:54:56.543112 | orchestrator | Friday 10 April 2026 00:49:02 +0000 (0:00:05.034) 0:04:28.130 ********** 2026-04-10 00:54:56.543116 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.543120 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.543124 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.543127 | orchestrator | 2026-04-10 00:54:56.543131 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-10 00:54:56.543135 | orchestrator | Friday 10 April 2026 00:49:04 +0000 (0:00:01.750) 0:04:29.880 ********** 2026-04-10 00:54:56.543139 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.543143 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.543146 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.543150 | orchestrator | 2026-04-10 00:54:56.543154 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-10 00:54:56.543158 | orchestrator | Friday 10 April 2026 00:49:04 +0000 (0:00:00.394) 0:04:30.274 ********** 2026-04-10 00:54:56.543162 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.543165 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.543169 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.543173 | orchestrator | 2026-04-10 00:54:56.543176 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-10 00:54:56.543181 | orchestrator | Friday 
10 April 2026 00:49:04 +0000 (0:00:00.269) 0:04:30.544 ********** 2026-04-10 00:54:56.543184 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.543188 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.543192 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.543195 | orchestrator | 2026-04-10 00:54:56.543215 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-10 00:54:56.543219 | orchestrator | Friday 10 April 2026 00:49:08 +0000 (0:00:03.274) 0:04:33.819 ********** 2026-04-10 00:54:56.543223 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.543226 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.543230 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.543234 | orchestrator | 2026-04-10 00:54:56.543237 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-10 00:54:56.543241 | orchestrator | Friday 10 April 2026 00:49:09 +0000 (0:00:01.789) 0:04:35.609 ********** 2026-04-10 00:54:56.543245 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.543271 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.543275 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.543279 | orchestrator | 2026-04-10 00:54:56.543283 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-10 00:54:56.543287 | orchestrator | Friday 10 April 2026 00:49:10 +0000 (0:00:00.420) 0:04:36.029 ********** 2026-04-10 00:54:56.543291 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-04-10 00:54:56.543295 | orchestrator | 2026-04-10 00:54:56.543299 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-10 00:54:56.543302 | orchestrator | Friday 10 April 2026 00:49:10 +0000 (0:00:00.499) 0:04:36.529 ********** 
2026-04-10 00:54:56.543306 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543310 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543314 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543317 | orchestrator |
2026-04-10 00:54:56.543321 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-10 00:54:56.543325 | orchestrator | Friday 10 April 2026 00:49:11 +0000 (0:00:00.444) 0:04:36.973 **********
2026-04-10 00:54:56.543333 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543337 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543341 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543345 | orchestrator |
2026-04-10 00:54:56.543349 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-10 00:54:56.543352 | orchestrator | Friday 10 April 2026 00:49:11 +0000 (0:00:00.245) 0:04:37.219 **********
2026-04-10 00:54:56.543356 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.543360 | orchestrator |
2026-04-10 00:54:56.543364 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-10 00:54:56.543367 | orchestrator | Friday 10 April 2026 00:49:12 +0000 (0:00:00.450) 0:04:37.670 **********
2026-04-10 00:54:56.543371 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.543375 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.543379 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.543382 | orchestrator |
2026-04-10 00:54:56.543386 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-10 00:54:56.543390 | orchestrator | Friday 10 April 2026 00:49:14 +0000 (0:00:02.169) 0:04:39.839 **********
2026-04-10 00:54:56.543393 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.543397 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.543401 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.543405 | orchestrator |
2026-04-10 00:54:56.543408 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-10 00:54:56.543412 | orchestrator | Friday 10 April 2026 00:49:15 +0000 (0:00:01.407) 0:04:41.247 **********
2026-04-10 00:54:56.543416 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.543420 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.543423 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.543427 | orchestrator |
2026-04-10 00:54:56.543431 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-10 00:54:56.543435 | orchestrator | Friday 10 April 2026 00:49:17 +0000 (0:00:01.908) 0:04:43.155 **********
2026-04-10 00:54:56.543439 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.543443 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.543447 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.543450 | orchestrator |
2026-04-10 00:54:56.543454 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-10 00:54:56.543458 | orchestrator | Friday 10 April 2026 00:49:19 +0000 (0:00:02.151) 0:04:45.307 **********
2026-04-10 00:54:56.543462 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.543465 | orchestrator |
2026-04-10 00:54:56.543472 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-10 00:54:56.543476 | orchestrator | Friday 10 April 2026 00:49:20 +0000 (0:00:00.841) 0:04:46.148 **********
2026-04-10 00:54:56.543479 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543483 | orchestrator |
2026-04-10 00:54:56.543487 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-10 00:54:56.543491 | orchestrator | Friday 10 April 2026 00:49:21 +0000 (0:00:00.809) 0:04:46.958 **********
2026-04-10 00:54:56.543494 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543498 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.543502 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.543506 | orchestrator |
2026-04-10 00:54:56.543509 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-10 00:54:56.543513 | orchestrator | Friday 10 April 2026 00:49:27 +0000 (0:00:05.942) 0:04:52.900 **********
2026-04-10 00:54:56.543517 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543521 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543524 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543528 | orchestrator |
2026-04-10 00:54:56.543536 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-10 00:54:56.543540 | orchestrator | Friday 10 April 2026 00:49:27 +0000 (0:00:00.252) 0:04:53.153 **********
2026-04-10 00:54:56.543558 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b89e089ef5856c3a33a5be47ccf603034588c778'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-10 00:54:56.543566 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b89e089ef5856c3a33a5be47ccf603034588c778'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-10 00:54:56.543571 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b89e089ef5856c3a33a5be47ccf603034588c778'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-10 00:54:56.543576 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b89e089ef5856c3a33a5be47ccf603034588c778'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-10 00:54:56.543581 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b89e089ef5856c3a33a5be47ccf603034588c778'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-10 00:54:56.543585 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b89e089ef5856c3a33a5be47ccf603034588c778'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b89e089ef5856c3a33a5be47ccf603034588c778'}])
2026-04-10 00:54:56.543591 | orchestrator |
2026-04-10 00:54:56.543594 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-10 00:54:56.543598 | orchestrator | Friday 10 April 2026 00:49:38 +0000 (0:00:10.590) 0:05:03.743 **********
2026-04-10 00:54:56.543602 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543606 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543609 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543613 | orchestrator |
2026-04-10 00:54:56.543617 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-10 00:54:56.543621 | orchestrator | Friday 10 April 2026 00:49:38 +0000 (0:00:00.313) 0:05:04.057 **********
2026-04-10 00:54:56.543625 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.543629 | orchestrator |
2026-04-10 00:54:56.543632 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-10 00:54:56.543636 | orchestrator | Friday 10 April 2026 00:49:39 +0000 (0:00:00.727) 0:05:04.784 **********
2026-04-10 00:54:56.543640 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543643 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.543647 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.543651 | orchestrator |
2026-04-10 00:54:56.543657 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-10 00:54:56.543665 | orchestrator | Friday 10 April 2026 00:49:39 +0000 (0:00:00.288) 0:05:05.073 **********
2026-04-10 00:54:56.543669 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543672 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543676 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543680 | orchestrator |
2026-04-10 00:54:56.543683 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-10 00:54:56.543687 | orchestrator | Friday 10 April 2026 00:49:39 +0000 (0:00:00.302) 0:05:05.375 **********
2026-04-10 00:54:56.543691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.543695 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-10 00:54:56.543699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-10 00:54:56.543702 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543706 | orchestrator |
2026-04-10 00:54:56.543710 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-10 00:54:56.543713 | orchestrator | Friday 10 April 2026 00:49:40 +0000 (0:00:00.832) 0:05:06.208 **********
2026-04-10 00:54:56.543717 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543721 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.543725 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.543729 | orchestrator |
2026-04-10 00:54:56.543732 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-10 00:54:56.543736 | orchestrator |
2026-04-10 00:54:56.543740 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-10 00:54:56.543755 | orchestrator | Friday 10 April 2026 00:49:41 +0000 (0:00:00.826) 0:05:07.034 **********
2026-04-10 00:54:56.543760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.543764 | orchestrator |
2026-04-10 00:54:56.543767 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-10 00:54:56.543771 | orchestrator | Friday 10 April 2026 00:49:41 +0000 (0:00:00.503) 0:05:07.537 **********
2026-04-10 00:54:56.543775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.543779 | orchestrator |
2026-04-10 00:54:56.543782 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-10 00:54:56.543786 | orchestrator | Friday 10 April 2026 00:49:42 +0000 (0:00:00.720) 0:05:08.258 **********
2026-04-10 00:54:56.543790 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543793 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.543797 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.543801 | orchestrator |
2026-04-10 00:54:56.543805 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-10 00:54:56.543808 | orchestrator | Friday 10 April 2026 00:49:43 +0000 (0:00:00.743) 0:05:09.001 **********
2026-04-10 00:54:56.543812 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543816 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543819 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543823 | orchestrator |
2026-04-10 00:54:56.543827 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-10 00:54:56.543831 | orchestrator | Friday 10 April 2026 00:49:43 +0000 (0:00:00.341) 0:05:09.342 **********
2026-04-10 00:54:56.543834 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543838 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543842 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543846 | orchestrator |
2026-04-10 00:54:56.543849 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-10 00:54:56.543853 | orchestrator | Friday 10 April 2026 00:49:44 +0000 (0:00:00.385) 0:05:09.728 **********
2026-04-10 00:54:56.543857 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543861 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543868 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543871 | orchestrator |
2026-04-10 00:54:56.543875 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-10 00:54:56.543879 | orchestrator | Friday 10 April 2026 00:49:44 +0000 (0:00:00.676) 0:05:10.404 **********
2026-04-10 00:54:56.543883 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.543886 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.543890 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543894 | orchestrator |
2026-04-10 00:54:56.543898 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-10 00:54:56.543901 | orchestrator | Friday 10 April 2026 00:49:45 +0000 (0:00:00.763) 0:05:11.168 **********
2026-04-10 00:54:56.543905 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543909 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543913 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543916 | orchestrator |
2026-04-10 00:54:56.543920 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-10 00:54:56.543924 | orchestrator | Friday 10 April 2026 00:49:45 +0000 (0:00:00.214) 0:05:11.382 **********
2026-04-10 00:54:56.543927 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.543931 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.543935 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.543938 | orchestrator |
2026-04-10 00:54:56.543942 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-10 00:54:56.543946 | orchestrator | Friday 10 April 2026 00:49:45 +0000 (0:00:00.225) 0:05:11.608 **********
2026-04-10 00:54:56.543950 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543953 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.543957 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.543961 | orchestrator |
2026-04-10 00:54:56.543965 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-10 00:54:56.543968 | orchestrator | Friday 10 April 2026 00:49:47 +0000 (0:00:01.069) 0:05:12.678 **********
2026-04-10 00:54:56.543972 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.543976 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.543980 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.543983 | orchestrator |
2026-04-10 00:54:56.543987 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-10 00:54:56.543994 | orchestrator | Friday 10 April 2026 00:49:47 +0000 (0:00:00.858) 0:05:13.537 **********
2026-04-10 00:54:56.543997 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544001 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544005 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544009 | orchestrator |
2026-04-10 00:54:56.544012 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-10 00:54:56.544016 | orchestrator | Friday 10 April 2026 00:49:48 +0000 (0:00:00.278) 0:05:13.815 **********
2026-04-10 00:54:56.544020 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544024 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544028 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544031 | orchestrator |
2026-04-10 00:54:56.544035 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-10 00:54:56.544039 | orchestrator | Friday 10 April 2026 00:49:48 +0000 (0:00:00.334) 0:05:14.149 **********
2026-04-10 00:54:56.544042 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544046 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544050 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544054 | orchestrator |
2026-04-10 00:54:56.544058 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-10 00:54:56.544061 | orchestrator | Friday 10 April 2026 00:49:49 +0000 (0:00:00.580) 0:05:14.729 **********
2026-04-10 00:54:56.544065 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544069 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544073 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544076 | orchestrator |
2026-04-10 00:54:56.544083 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-10 00:54:56.544099 | orchestrator | Friday 10 April 2026 00:49:49 +0000 (0:00:00.336) 0:05:15.066 **********
2026-04-10 00:54:56.544104 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544108 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544111 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544115 | orchestrator |
2026-04-10 00:54:56.544119 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-10 00:54:56.544122 | orchestrator | Friday 10 April 2026 00:49:49 +0000 (0:00:00.327) 0:05:15.393 **********
2026-04-10 00:54:56.544126 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544131 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544134 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544138 | orchestrator |
2026-04-10 00:54:56.544142 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-10 00:54:56.544145 | orchestrator | Friday 10 April 2026 00:49:50 +0000 (0:00:00.292) 0:05:15.686 **********
2026-04-10 00:54:56.544149 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544153 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544156 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544160 | orchestrator |
2026-04-10 00:54:56.544164 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-10 00:54:56.544168 | orchestrator | Friday 10 April 2026 00:49:50 +0000 (0:00:00.270) 0:05:15.957 **********
2026-04-10 00:54:56.544171 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544175 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544179 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544182 | orchestrator |
2026-04-10 00:54:56.544186 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-10 00:54:56.544190 | orchestrator | Friday 10 April 2026 00:49:50 +0000 (0:00:00.530) 0:05:16.487 **********
2026-04-10 00:54:56.544194 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544197 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544201 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544205 | orchestrator |
2026-04-10 00:54:56.544209 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-10 00:54:56.544213 | orchestrator | Friday 10 April 2026 00:49:51 +0000 (0:00:00.312) 0:05:16.799 **********
2026-04-10 00:54:56.544216 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544220 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544224 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544228 | orchestrator |
2026-04-10 00:54:56.544232 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-10 00:54:56.544235 | orchestrator | Friday 10 April 2026 00:49:51 +0000 (0:00:00.396) 0:05:17.196 **********
2026-04-10 00:54:56.544239 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.544243 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-10 00:54:56.544247 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-10 00:54:56.544263 | orchestrator |
2026-04-10 00:54:56.544266 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-10 00:54:56.544270 | orchestrator | Friday 10 April 2026 00:49:52 +0000 (0:00:00.780) 0:05:17.976 **********
2026-04-10 00:54:56.544274 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.544278 | orchestrator |
2026-04-10 00:54:56.544282 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-10 00:54:56.544285 | orchestrator | Friday 10 April 2026 00:49:53 +0000 (0:00:00.648) 0:05:18.625 **********
2026-04-10 00:54:56.544289 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.544293 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.544297 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.544300 | orchestrator |
2026-04-10 00:54:56.544304 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-10 00:54:56.544311 | orchestrator | Friday 10 April 2026 00:49:53 +0000 (0:00:00.763) 0:05:19.389 **********
2026-04-10 00:54:56.544315 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544319 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544323 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544326 | orchestrator |
2026-04-10 00:54:56.544330 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-10 00:54:56.544334 | orchestrator | Friday 10 April 2026 00:49:54 +0000 (0:00:00.337) 0:05:19.726 **********
2026-04-10 00:54:56.544338 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-10 00:54:56.544342 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-10 00:54:56.544348 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-10 00:54:56.544352 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-10 00:54:56.544356 | orchestrator |
2026-04-10 00:54:56.544359 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-10 00:54:56.544363 | orchestrator | Friday 10 April 2026 00:50:02 +0000 (0:00:08.675) 0:05:28.402 **********
2026-04-10 00:54:56.544367 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544371 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544375 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544378 | orchestrator |
2026-04-10 00:54:56.544382 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-10 00:54:56.544386 | orchestrator | Friday 10 April 2026 00:50:03 +0000 (0:00:00.386) 0:05:28.788 **********
2026-04-10 00:54:56.544390 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-10 00:54:56.544394 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-10 00:54:56.544398 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-10 00:54:56.544402 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-10 00:54:56.544405 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-10 00:54:56.544409 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-10 00:54:56.544413 | orchestrator |
2026-04-10 00:54:56.544417 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-10 00:54:56.544420 | orchestrator | Friday 10 April 2026 00:50:05 +0000 (0:00:02.070) 0:05:30.858 **********
2026-04-10 00:54:56.544438 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-10 00:54:56.544442 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-10 00:54:56.544446 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-10 00:54:56.544450 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-10 00:54:56.544454 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-10 00:54:56.544457 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-10 00:54:56.544461 | orchestrator |
2026-04-10 00:54:56.544465 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-10 00:54:56.544468 | orchestrator | Friday 10 April 2026 00:50:06 +0000 (0:00:01.201) 0:05:32.060 **********
2026-04-10 00:54:56.544472 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544476 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544480 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544483 | orchestrator |
2026-04-10 00:54:56.544487 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-10 00:54:56.544491 | orchestrator | Friday 10 April 2026 00:50:07 +0000 (0:00:00.945) 0:05:33.006 **********
2026-04-10 00:54:56.544494 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544498 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544502 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544505 | orchestrator |
2026-04-10 00:54:56.544509 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-10 00:54:56.544513 | orchestrator | Friday 10 April 2026 00:50:07 +0000 (0:00:00.496) 0:05:33.503 **********
2026-04-10 00:54:56.544517 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544525 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544529 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544532 | orchestrator |
2026-04-10 00:54:56.544536 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-10 00:54:56.544540 | orchestrator | Friday 10 April 2026 00:50:08 +0000 (0:00:00.273) 0:05:33.776 **********
2026-04-10 00:54:56.544544 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.544547 | orchestrator |
2026-04-10 00:54:56.544551 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-10 00:54:56.544555 | orchestrator | Friday 10 April 2026 00:50:08 +0000 (0:00:00.479) 0:05:34.256 **********
2026-04-10 00:54:56.544558 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544562 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544566 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544570 | orchestrator |
2026-04-10 00:54:56.544573 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-10 00:54:56.544577 | orchestrator | Friday 10 April 2026 00:50:09 +0000 (0:00:00.439) 0:05:34.695 **********
2026-04-10 00:54:56.544581 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544585 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544588 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:54:56.544592 | orchestrator |
2026-04-10 00:54:56.544596 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-10 00:54:56.544600 | orchestrator | Friday 10 April 2026 00:50:09 +0000 (0:00:00.283) 0:05:34.978 **********
2026-04-10 00:54:56.544604 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.544607 | orchestrator |
2026-04-10 00:54:56.544611 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-10 00:54:56.544615 | orchestrator | Friday 10 April 2026 00:50:09 +0000 (0:00:00.451) 0:05:35.430 **********
2026-04-10 00:54:56.544618 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.544622 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.544626 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.544630 | orchestrator |
2026-04-10 00:54:56.544634 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-10 00:54:56.544637 | orchestrator | Friday 10 April 2026 00:50:11 +0000 (0:00:01.522) 0:05:36.953 **********
2026-04-10 00:54:56.544641 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.544645 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.544649 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.544652 | orchestrator |
2026-04-10 00:54:56.544656 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-10 00:54:56.544660 | orchestrator | Friday 10 April 2026 00:50:12 +0000 (0:00:01.192) 0:05:38.145 **********
2026-04-10 00:54:56.544664 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.544667 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.544674 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.544678 | orchestrator |
2026-04-10 00:54:56.544682 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-10 00:54:56.544686 | orchestrator | Friday 10 April 2026 00:50:14 +0000 (0:00:02.001) 0:05:40.147 **********
2026-04-10 00:54:56.544689 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.544693 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.544697 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.544701 | orchestrator |
2026-04-10 00:54:56.544705 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-10 00:54:56.544708 | orchestrator | Friday 10 April 2026 00:50:16 +0000 (0:00:01.984) 0:05:42.131 **********
2026-04-10 00:54:56.544712 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544716 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:54:56.544720 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-10 00:54:56.544727 | orchestrator |
2026-04-10 00:54:56.544731 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-10 00:54:56.544735 | orchestrator | Friday 10 April 2026 00:50:17 +0000 (0:00:00.583) 0:05:42.715 **********
2026-04-10 00:54:56.544738 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-10 00:54:56.544742 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-10 00:54:56.544758 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-10 00:54:56.544763 | orchestrator |
2026-04-10 00:54:56.544767 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-10 00:54:56.544770 | orchestrator | Friday 10 April 2026 00:50:30 +0000 (0:00:13.143) 0:05:55.858 **********
2026-04-10 00:54:56.544774 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-10 00:54:56.544778 | orchestrator |
2026-04-10 00:54:56.544782 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-10 00:54:56.544785 | orchestrator | Friday 10 April 2026 00:50:31 +0000 (0:00:01.327) 0:05:57.186 **********
2026-04-10 00:54:56.544789 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544793 | orchestrator |
2026-04-10 00:54:56.544797 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-10 00:54:56.544800 | orchestrator | Friday 10 April 2026 00:50:31 +0000 (0:00:00.279) 0:05:57.466 **********
2026-04-10 00:54:56.544804 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544808 | orchestrator |
2026-04-10 00:54:56.544811 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-10 00:54:56.544815 | orchestrator | Friday 10 April 2026 00:50:31 +0000 (0:00:00.117) 0:05:57.584 **********
2026-04-10 00:54:56.544819 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-10 00:54:56.544823 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-10 00:54:56.544826 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-10 00:54:56.544830 | orchestrator |
2026-04-10 00:54:56.544834 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-10 00:54:56.544838 | orchestrator | Friday 10 April 2026 00:50:38 +0000 (0:00:06.044) 0:06:03.628 **********
2026-04-10 00:54:56.544842 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-10 00:54:56.544845 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-10 00:54:56.544849 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-10 00:54:56.544853 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-10 00:54:56.544857 | orchestrator |
2026-04-10 00:54:56.544860 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-10 00:54:56.544864 | orchestrator | Friday 10 April 2026 00:50:42 +0000 (0:00:04.715) 0:06:08.343 **********
2026-04-10 00:54:56.544868 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.544872 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.544875 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.544879 | orchestrator |
2026-04-10 00:54:56.544883 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-10 00:54:56.544886 | orchestrator | Friday 10 April 2026 00:50:43 +0000 (0:00:00.636) 0:06:08.980 **********
2026-04-10 00:54:56.544890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:54:56.544894 | orchestrator |
2026-04-10 00:54:56.544898 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-10 00:54:56.544901 | orchestrator | Friday 10 April 2026 00:50:43 +0000 (0:00:00.448) 0:06:09.444 **********
2026-04-10 00:54:56.544905 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544912 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544916 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544919 | orchestrator |
2026-04-10 00:54:56.544923 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-10 00:54:56.544927 | orchestrator | Friday 10 April 2026 00:50:44 +0000 (0:00:00.448) 0:06:09.892 **********
2026-04-10 00:54:56.544930 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:54:56.544934 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:54:56.544938 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:54:56.544942 | orchestrator |
2026-04-10 00:54:56.544945 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-10 00:54:56.544949 | orchestrator | Friday 10 April 2026 00:50:45 +0000 (0:00:01.181) 0:06:11.074 **********
2026-04-10 00:54:56.544953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-10 00:54:56.544956 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-10 00:54:56.544960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-10 00:54:56.544964 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:54:56.544968 | orchestrator |
2026-04-10 00:54:56.544974 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-10 00:54:56.544978 | orchestrator | Friday 10 April 2026 00:50:45 +0000 (0:00:00.528) 0:06:11.602 **********
2026-04-10 00:54:56.544982 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:54:56.544986 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:54:56.544989 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:54:56.544993 | orchestrator |
2026-04-10 00:54:56.544997 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-10 00:54:56.545001 | orchestrator |
2026-04-10 00:54:56.545004 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-10 00:54:56.545008 | orchestrator | Friday 10 April 2026 00:50:46 +0000 (0:00:00.482) 0:06:12.085 **********
2026-04-10 00:54:56.545012 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:54:56.545016 | orchestrator |
2026-04-10 00:54:56.545020 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-10 00:54:56.545023 | orchestrator | Friday 10 April 2026 00:50:47 +0000 (0:00:00.593) 0:06:12.679 **********
2026-04-10 00:54:56.545027 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:54:56.545031 | orchestrator |
2026-04-10 00:54:56.545034 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-10 00:54:56.545050 | orchestrator | Friday 10 April 2026 00:50:47 +0000 (0:00:00.433) 0:06:13.112 **********
2026-04-10 00:54:56.545055 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.545058 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.545062 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:54:56.545066 | orchestrator |
2026-04-10 00:54:56.545070 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-10 00:54:56.545073 | orchestrator | Friday 10 April 2026 00:50:47 +0000 (0:00:00.379) 0:06:13.491 **********
2026-04-10 00:54:56.545077 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.545081 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.545085 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.545088 | orchestrator |
2026-04-10 00:54:56.545092 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-10 00:54:56.545096 | orchestrator | Friday 10 April 2026 00:50:48 +0000 (0:00:00.679) 0:06:14.171 **********
2026-04-10 00:54:56.545099 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.545103 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.545107 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.545110 | orchestrator |
2026-04-10 00:54:56.545114 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-10 00:54:56.545118 | orchestrator | Friday 10 April 2026 00:50:49 +0000 (0:00:00.653) 0:06:14.825 **********
2026-04-10 00:54:56.545125 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:54:56.545128 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:54:56.545132 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:54:56.545136 | orchestrator |
2026-04-10 00:54:56.545140 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-10 00:54:56.545143 | orchestrator | Friday 10 April 2026 00:50:49 +0000 (0:00:00.665) 0:06:15.491 **********
2026-04-10 00:54:56.545147 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:54:56.545151 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:54:56.545155 | orchestrator | skipping:
[testbed-node-5] 2026-04-10 00:54:56.545158 | orchestrator | 2026-04-10 00:54:56.545162 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-10 00:54:56.545166 | orchestrator | Friday 10 April 2026 00:50:50 +0000 (0:00:00.443) 0:06:15.934 ********** 2026-04-10 00:54:56.545170 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545173 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545177 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545181 | orchestrator | 2026-04-10 00:54:56.545184 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-10 00:54:56.545188 | orchestrator | Friday 10 April 2026 00:50:50 +0000 (0:00:00.263) 0:06:16.198 ********** 2026-04-10 00:54:56.545192 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545196 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545199 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545203 | orchestrator | 2026-04-10 00:54:56.545207 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-10 00:54:56.545210 | orchestrator | Friday 10 April 2026 00:50:50 +0000 (0:00:00.272) 0:06:16.470 ********** 2026-04-10 00:54:56.545214 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545218 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545222 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545225 | orchestrator | 2026-04-10 00:54:56.545229 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-10 00:54:56.545233 | orchestrator | Friday 10 April 2026 00:50:51 +0000 (0:00:00.638) 0:06:17.109 ********** 2026-04-10 00:54:56.545237 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545241 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545244 | orchestrator | ok: [testbed-node-5] 2026-04-10 
00:54:56.545248 | orchestrator | 2026-04-10 00:54:56.545264 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-10 00:54:56.545268 | orchestrator | Friday 10 April 2026 00:50:52 +0000 (0:00:00.903) 0:06:18.012 ********** 2026-04-10 00:54:56.545272 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545275 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545279 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545283 | orchestrator | 2026-04-10 00:54:56.545287 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-10 00:54:56.545290 | orchestrator | Friday 10 April 2026 00:50:52 +0000 (0:00:00.301) 0:06:18.314 ********** 2026-04-10 00:54:56.545294 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545298 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545301 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545305 | orchestrator | 2026-04-10 00:54:56.545309 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-10 00:54:56.545312 | orchestrator | Friday 10 April 2026 00:50:52 +0000 (0:00:00.260) 0:06:18.575 ********** 2026-04-10 00:54:56.545316 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545320 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545326 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545330 | orchestrator | 2026-04-10 00:54:56.545334 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-10 00:54:56.545338 | orchestrator | Friday 10 April 2026 00:50:53 +0000 (0:00:00.319) 0:06:18.895 ********** 2026-04-10 00:54:56.545341 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545345 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545363 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545367 | orchestrator | 
2026-04-10 00:54:56.545371 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-10 00:54:56.545374 | orchestrator | Friday 10 April 2026 00:50:53 +0000 (0:00:00.466) 0:06:19.361 ********** 2026-04-10 00:54:56.545378 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545382 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545386 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545389 | orchestrator | 2026-04-10 00:54:56.545393 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-10 00:54:56.545397 | orchestrator | Friday 10 April 2026 00:50:54 +0000 (0:00:00.283) 0:06:19.645 ********** 2026-04-10 00:54:56.545401 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545404 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545408 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545412 | orchestrator | 2026-04-10 00:54:56.545416 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-10 00:54:56.545419 | orchestrator | Friday 10 April 2026 00:50:54 +0000 (0:00:00.253) 0:06:19.898 ********** 2026-04-10 00:54:56.545423 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545427 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545433 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545437 | orchestrator | 2026-04-10 00:54:56.545441 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-10 00:54:56.545445 | orchestrator | Friday 10 April 2026 00:50:54 +0000 (0:00:00.249) 0:06:20.148 ********** 2026-04-10 00:54:56.545448 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545452 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545456 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545459 | orchestrator | 2026-04-10 
00:54:56.545463 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-10 00:54:56.545467 | orchestrator | Friday 10 April 2026 00:50:54 +0000 (0:00:00.421) 0:06:20.569 ********** 2026-04-10 00:54:56.545471 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545474 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545478 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545482 | orchestrator | 2026-04-10 00:54:56.545485 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-10 00:54:56.545489 | orchestrator | Friday 10 April 2026 00:50:55 +0000 (0:00:00.338) 0:06:20.908 ********** 2026-04-10 00:54:56.545493 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545497 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545500 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545504 | orchestrator | 2026-04-10 00:54:56.545508 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-10 00:54:56.545511 | orchestrator | Friday 10 April 2026 00:50:55 +0000 (0:00:00.440) 0:06:21.349 ********** 2026-04-10 00:54:56.545515 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545519 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545523 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545526 | orchestrator | 2026-04-10 00:54:56.545530 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-10 00:54:56.545534 | orchestrator | Friday 10 April 2026 00:50:56 +0000 (0:00:00.461) 0:06:21.810 ********** 2026-04-10 00:54:56.545538 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-10 00:54:56.545542 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-10 00:54:56.545545 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-10 00:54:56.545549 | orchestrator | 2026-04-10 00:54:56.545553 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-10 00:54:56.545557 | orchestrator | Friday 10 April 2026 00:50:56 +0000 (0:00:00.543) 0:06:22.354 ********** 2026-04-10 00:54:56.545560 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.545567 | orchestrator | 2026-04-10 00:54:56.545571 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-10 00:54:56.545575 | orchestrator | Friday 10 April 2026 00:50:57 +0000 (0:00:00.438) 0:06:22.792 ********** 2026-04-10 00:54:56.545579 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545583 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545586 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545590 | orchestrator | 2026-04-10 00:54:56.545594 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-10 00:54:56.545598 | orchestrator | Friday 10 April 2026 00:50:57 +0000 (0:00:00.245) 0:06:23.038 ********** 2026-04-10 00:54:56.545601 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545605 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545609 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545613 | orchestrator | 2026-04-10 00:54:56.545617 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-10 00:54:56.545620 | orchestrator | Friday 10 April 2026 00:50:57 +0000 (0:00:00.414) 0:06:23.453 ********** 2026-04-10 00:54:56.545624 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545628 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545631 | orchestrator | ok: [testbed-node-5] 2026-04-10 
00:54:56.545635 | orchestrator | 2026-04-10 00:54:56.545639 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-10 00:54:56.545643 | orchestrator | Friday 10 April 2026 00:50:58 +0000 (0:00:00.579) 0:06:24.032 ********** 2026-04-10 00:54:56.545646 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.545650 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.545654 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.545657 | orchestrator | 2026-04-10 00:54:56.545661 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-10 00:54:56.545665 | orchestrator | Friday 10 April 2026 00:50:58 +0000 (0:00:00.281) 0:06:24.314 ********** 2026-04-10 00:54:56.545671 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-10 00:54:56.545675 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-10 00:54:56.545679 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-10 00:54:56.545682 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-10 00:54:56.545686 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-10 00:54:56.545690 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-10 00:54:56.545694 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-10 00:54:56.545698 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-10 00:54:56.545701 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-10 00:54:56.545705 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'vm.swappiness', 'value': 10}) 2026-04-10 00:54:56.545709 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-10 00:54:56.545718 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-10 00:54:56.545722 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-10 00:54:56.545725 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-10 00:54:56.545729 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-10 00:54:56.545733 | orchestrator | 2026-04-10 00:54:56.545736 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-10 00:54:56.545743 | orchestrator | Friday 10 April 2026 00:51:03 +0000 (0:00:05.098) 0:06:29.413 ********** 2026-04-10 00:54:56.545747 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545751 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545754 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545758 | orchestrator | 2026-04-10 00:54:56.545762 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-10 00:54:56.545766 | orchestrator | Friday 10 April 2026 00:51:04 +0000 (0:00:00.395) 0:06:29.809 ********** 2026-04-10 00:54:56.545770 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.545774 | orchestrator | 2026-04-10 00:54:56.545777 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-10 00:54:56.545781 | orchestrator | Friday 10 April 2026 00:51:04 +0000 (0:00:00.370) 0:06:30.179 ********** 2026-04-10 00:54:56.545785 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 
2026-04-10 00:54:56.545789 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-10 00:54:56.545792 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-10 00:54:56.545796 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-10 00:54:56.545800 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-10 00:54:56.545804 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-10 00:54:56.545807 | orchestrator | 2026-04-10 00:54:56.545811 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-10 00:54:56.545815 | orchestrator | Friday 10 April 2026 00:51:05 +0000 (0:00:00.980) 0:06:31.159 ********** 2026-04-10 00:54:56.545819 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.545822 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-10 00:54:56.545826 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-10 00:54:56.545830 | orchestrator | 2026-04-10 00:54:56.545834 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-10 00:54:56.545837 | orchestrator | Friday 10 April 2026 00:51:07 +0000 (0:00:01.932) 0:06:33.092 ********** 2026-04-10 00:54:56.545841 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-10 00:54:56.545845 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-10 00:54:56.545849 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.545852 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-10 00:54:56.545856 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-10 00:54:56.545860 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.545864 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-10 00:54:56.545867 | orchestrator | skipping: 
[testbed-node-5] => (item=None)  2026-04-10 00:54:56.545871 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.545875 | orchestrator | 2026-04-10 00:54:56.545878 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-10 00:54:56.545882 | orchestrator | Friday 10 April 2026 00:51:08 +0000 (0:00:01.321) 0:06:34.414 ********** 2026-04-10 00:54:56.545886 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-10 00:54:56.545890 | orchestrator | 2026-04-10 00:54:56.545893 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-10 00:54:56.545897 | orchestrator | Friday 10 April 2026 00:51:10 +0000 (0:00:01.754) 0:06:36.168 ********** 2026-04-10 00:54:56.545901 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.545905 | orchestrator | 2026-04-10 00:54:56.545908 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-10 00:54:56.545915 | orchestrator | Friday 10 April 2026 00:51:10 +0000 (0:00:00.439) 0:06:36.607 ********** 2026-04-10 00:54:56.545919 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-465b2d07-90ab-575b-b156-9a24eede9b64', 'data_vg': 'ceph-465b2d07-90ab-575b-b156-9a24eede9b64'}) 2026-04-10 00:54:56.545926 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4a24d887-4b45-578e-8445-fe6f68cb2659', 'data_vg': 'ceph-4a24d887-4b45-578e-8445-fe6f68cb2659'}) 2026-04-10 00:54:56.545930 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-09201c46-e11a-5302-956e-912d17e7f9de', 'data_vg': 'ceph-09201c46-e11a-5302-956e-912d17e7f9de'}) 2026-04-10 00:54:56.545934 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a684d377-5ec1-594b-83a4-e92528b1ce81', 'data_vg': 'ceph-a684d377-5ec1-594b-83a4-e92528b1ce81'}) 
2026-04-10 00:54:56.545938 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-83f5954c-7956-54fb-af17-18f84b92edf0', 'data_vg': 'ceph-83f5954c-7956-54fb-af17-18f84b92edf0'}) 2026-04-10 00:54:56.545942 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0863171e-1302-565f-bee5-d18b6804a785', 'data_vg': 'ceph-0863171e-1302-565f-bee5-d18b6804a785'}) 2026-04-10 00:54:56.545945 | orchestrator | 2026-04-10 00:54:56.545951 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-10 00:54:56.545955 | orchestrator | Friday 10 April 2026 00:51:49 +0000 (0:00:38.929) 0:07:15.537 ********** 2026-04-10 00:54:56.545959 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.545963 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.545966 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.545970 | orchestrator | 2026-04-10 00:54:56.545974 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-10 00:54:56.545978 | orchestrator | Friday 10 April 2026 00:51:50 +0000 (0:00:00.275) 0:07:15.813 ********** 2026-04-10 00:54:56.545981 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.545985 | orchestrator | 2026-04-10 00:54:56.545989 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-10 00:54:56.545993 | orchestrator | Friday 10 April 2026 00:51:50 +0000 (0:00:00.460) 0:07:16.273 ********** 2026-04-10 00:54:56.545996 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.546000 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.546004 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.546008 | orchestrator | 2026-04-10 00:54:56.546012 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-10 
00:54:56.546048 | orchestrator | Friday 10 April 2026 00:51:51 +0000 (0:00:00.802) 0:07:17.076 ********** 2026-04-10 00:54:56.546053 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.546056 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.546060 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.546064 | orchestrator | 2026-04-10 00:54:56.546068 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-10 00:54:56.546071 | orchestrator | Friday 10 April 2026 00:51:52 +0000 (0:00:01.486) 0:07:18.563 ********** 2026-04-10 00:54:56.546075 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.546079 | orchestrator | 2026-04-10 00:54:56.546083 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-10 00:54:56.546087 | orchestrator | Friday 10 April 2026 00:51:53 +0000 (0:00:00.457) 0:07:19.020 ********** 2026-04-10 00:54:56.546090 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.546094 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.546098 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.546102 | orchestrator | 2026-04-10 00:54:56.546106 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-10 00:54:56.546109 | orchestrator | Friday 10 April 2026 00:51:54 +0000 (0:00:01.319) 0:07:20.339 ********** 2026-04-10 00:54:56.546113 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.546117 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.546121 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.546124 | orchestrator | 2026-04-10 00:54:56.546132 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-10 00:54:56.546135 | orchestrator | Friday 10 April 2026 00:51:55 +0000 (0:00:01.162) 
0:07:21.502 ********** 2026-04-10 00:54:56.546139 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.546143 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.546147 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.546151 | orchestrator | 2026-04-10 00:54:56.546154 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-10 00:54:56.546158 | orchestrator | Friday 10 April 2026 00:51:57 +0000 (0:00:01.816) 0:07:23.319 ********** 2026-04-10 00:54:56.546162 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546166 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546170 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546173 | orchestrator | 2026-04-10 00:54:56.546177 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-10 00:54:56.546181 | orchestrator | Friday 10 April 2026 00:51:58 +0000 (0:00:00.303) 0:07:23.622 ********** 2026-04-10 00:54:56.546185 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546188 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546192 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546196 | orchestrator | 2026-04-10 00:54:56.546200 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-10 00:54:56.546203 | orchestrator | Friday 10 April 2026 00:51:58 +0000 (0:00:00.504) 0:07:24.126 ********** 2026-04-10 00:54:56.546207 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-10 00:54:56.546211 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-10 00:54:56.546215 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-10 00:54:56.546222 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-10 00:54:56.546226 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-10 00:54:56.546229 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-10 00:54:56.546233 | 
orchestrator | 2026-04-10 00:54:56.546237 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-10 00:54:56.546241 | orchestrator | Friday 10 April 2026 00:51:59 +0000 (0:00:01.052) 0:07:25.179 ********** 2026-04-10 00:54:56.546245 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-10 00:54:56.546276 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-10 00:54:56.546281 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-10 00:54:56.546285 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-10 00:54:56.546289 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-10 00:54:56.546293 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-10 00:54:56.546296 | orchestrator | 2026-04-10 00:54:56.546300 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-10 00:54:56.546304 | orchestrator | Friday 10 April 2026 00:52:01 +0000 (0:00:02.120) 0:07:27.299 ********** 2026-04-10 00:54:56.546308 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-10 00:54:56.546312 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-10 00:54:56.546315 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-10 00:54:56.546319 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-10 00:54:56.546323 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-10 00:54:56.546327 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-10 00:54:56.546331 | orchestrator | 2026-04-10 00:54:56.546337 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-10 00:54:56.546341 | orchestrator | Friday 10 April 2026 00:52:05 +0000 (0:00:03.861) 0:07:31.161 ********** 2026-04-10 00:54:56.546345 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546349 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546353 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-10 00:54:56.546356 | orchestrator | 2026-04-10 00:54:56.546360 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-10 00:54:56.546364 | orchestrator | Friday 10 April 2026 00:52:08 +0000 (0:00:02.477) 0:07:33.639 ********** 2026-04-10 00:54:56.546373 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546377 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546381 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-10 00:54:56.546384 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-10 00:54:56.546388 | orchestrator | 2026-04-10 00:54:56.546392 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-10 00:54:56.546396 | orchestrator | Friday 10 April 2026 00:52:20 +0000 (0:00:12.464) 0:07:46.103 ********** 2026-04-10 00:54:56.546399 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546403 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546407 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546411 | orchestrator | 2026-04-10 00:54:56.546415 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-10 00:54:56.546418 | orchestrator | Friday 10 April 2026 00:52:21 +0000 (0:00:00.902) 0:07:47.006 ********** 2026-04-10 00:54:56.546422 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546426 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546430 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546434 | orchestrator | 2026-04-10 00:54:56.546437 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-10 00:54:56.546441 | orchestrator | Friday 10 April 2026 00:52:21 +0000 (0:00:00.303) 
0:07:47.309 ********** 2026-04-10 00:54:56.546445 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.546449 | orchestrator | 2026-04-10 00:54:56.546453 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-10 00:54:56.546456 | orchestrator | Friday 10 April 2026 00:52:22 +0000 (0:00:00.464) 0:07:47.773 ********** 2026-04-10 00:54:56.546460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:54:56.546464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:54:56.546468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:54:56.546472 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546476 | orchestrator | 2026-04-10 00:54:56.546479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-10 00:54:56.546483 | orchestrator | Friday 10 April 2026 00:52:22 +0000 (0:00:00.522) 0:07:48.296 ********** 2026-04-10 00:54:56.546487 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546491 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546494 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546498 | orchestrator | 2026-04-10 00:54:56.546502 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-10 00:54:56.546506 | orchestrator | Friday 10 April 2026 00:52:23 +0000 (0:00:00.525) 0:07:48.821 ********** 2026-04-10 00:54:56.546510 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546513 | orchestrator | 2026-04-10 00:54:56.546517 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-10 00:54:56.546521 | orchestrator | Friday 10 April 2026 00:52:23 +0000 (0:00:00.204) 0:07:49.026 ********** 2026-04-10 00:54:56.546525 | 
orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546529 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546532 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546536 | orchestrator | 2026-04-10 00:54:56.546540 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-10 00:54:56.546544 | orchestrator | Friday 10 April 2026 00:52:23 +0000 (0:00:00.263) 0:07:49.289 ********** 2026-04-10 00:54:56.546547 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546551 | orchestrator | 2026-04-10 00:54:56.546555 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-10 00:54:56.546559 | orchestrator | Friday 10 April 2026 00:52:23 +0000 (0:00:00.223) 0:07:49.512 ********** 2026-04-10 00:54:56.546568 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546572 | orchestrator | 2026-04-10 00:54:56.546576 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-10 00:54:56.546580 | orchestrator | Friday 10 April 2026 00:52:24 +0000 (0:00:00.208) 0:07:49.721 ********** 2026-04-10 00:54:56.546584 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546587 | orchestrator | 2026-04-10 00:54:56.546591 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-10 00:54:56.546595 | orchestrator | Friday 10 April 2026 00:52:24 +0000 (0:00:00.111) 0:07:49.832 ********** 2026-04-10 00:54:56.546599 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546602 | orchestrator | 2026-04-10 00:54:56.546606 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-10 00:54:56.546610 | orchestrator | Friday 10 April 2026 00:52:24 +0000 (0:00:00.202) 0:07:50.035 ********** 2026-04-10 00:54:56.546614 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546617 | 
orchestrator | 2026-04-10 00:54:56.546621 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-10 00:54:56.546625 | orchestrator | Friday 10 April 2026 00:52:24 +0000 (0:00:00.180) 0:07:50.215 ********** 2026-04-10 00:54:56.546629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:54:56.546633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:54:56.546636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:54:56.546640 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546644 | orchestrator | 2026-04-10 00:54:56.546650 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-10 00:54:56.546654 | orchestrator | Friday 10 April 2026 00:52:25 +0000 (0:00:00.676) 0:07:50.891 ********** 2026-04-10 00:54:56.546658 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546662 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546665 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546669 | orchestrator | 2026-04-10 00:54:56.546673 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-10 00:54:56.546677 | orchestrator | Friday 10 April 2026 00:52:25 +0000 (0:00:00.423) 0:07:51.315 ********** 2026-04-10 00:54:56.546681 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546684 | orchestrator | 2026-04-10 00:54:56.546688 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-10 00:54:56.546692 | orchestrator | Friday 10 April 2026 00:52:25 +0000 (0:00:00.192) 0:07:51.508 ********** 2026-04-10 00:54:56.546696 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546700 | orchestrator | 2026-04-10 00:54:56.546703 | orchestrator | PLAY [Apply role ceph-crash] 
*************************************************** 2026-04-10 00:54:56.546707 | orchestrator | 2026-04-10 00:54:56.546711 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-10 00:54:56.546715 | orchestrator | Friday 10 April 2026 00:52:26 +0000 (0:00:00.661) 0:07:52.169 ********** 2026-04-10 00:54:56.546719 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.546723 | orchestrator | 2026-04-10 00:54:56.546726 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-10 00:54:56.546730 | orchestrator | Friday 10 April 2026 00:52:27 +0000 (0:00:01.037) 0:07:53.206 ********** 2026-04-10 00:54:56.546734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.546738 | orchestrator | 2026-04-10 00:54:56.546742 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-10 00:54:56.546745 | orchestrator | Friday 10 April 2026 00:52:28 +0000 (0:00:00.971) 0:07:54.178 ********** 2026-04-10 00:54:56.546749 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546756 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.546760 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546763 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.546767 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546771 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.546775 | orchestrator | 2026-04-10 00:54:56.546778 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-10 00:54:56.546782 | orchestrator | Friday 10 April 2026 00:52:29 +0000 
(0:00:00.685) 0:07:54.864 ********** 2026-04-10 00:54:56.546786 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.546790 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.546794 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.546797 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.546801 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.546805 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.546809 | orchestrator | 2026-04-10 00:54:56.546813 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-10 00:54:56.546817 | orchestrator | Friday 10 April 2026 00:52:30 +0000 (0:00:00.980) 0:07:55.845 ********** 2026-04-10 00:54:56.546820 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.546824 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.546828 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.546832 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.546835 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.546839 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.546843 | orchestrator | 2026-04-10 00:54:56.546847 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-10 00:54:56.546850 | orchestrator | Friday 10 April 2026 00:52:31 +0000 (0:00:01.144) 0:07:56.989 ********** 2026-04-10 00:54:56.546854 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.546858 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.546862 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.546865 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.546869 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.546873 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.546877 | orchestrator | 2026-04-10 00:54:56.546880 | orchestrator | TASK [ceph-handler : Check for a mgr container] 
******************************** 2026-04-10 00:54:56.546887 | orchestrator | Friday 10 April 2026 00:52:32 +0000 (0:00:00.953) 0:07:57.942 ********** 2026-04-10 00:54:56.546891 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546895 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.546899 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.546902 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.546906 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546910 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546914 | orchestrator | 2026-04-10 00:54:56.546917 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-10 00:54:56.546921 | orchestrator | Friday 10 April 2026 00:52:33 +0000 (0:00:00.880) 0:07:58.822 ********** 2026-04-10 00:54:56.546925 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.546929 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.546933 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.546936 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546940 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546944 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.546948 | orchestrator | 2026-04-10 00:54:56.546951 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-10 00:54:56.546955 | orchestrator | Friday 10 April 2026 00:52:33 +0000 (0:00:00.596) 0:07:59.419 ********** 2026-04-10 00:54:56.546959 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.546963 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.546967 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.546970 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.546974 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.546982 | orchestrator | skipping: [testbed-node-5] 2026-04-10 
00:54:56.546985 | orchestrator | 2026-04-10 00:54:56.546991 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-10 00:54:56.546995 | orchestrator | Friday 10 April 2026 00:52:34 +0000 (0:00:00.823) 0:08:00.243 ********** 2026-04-10 00:54:56.546999 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547003 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547007 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547010 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547014 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547018 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547022 | orchestrator | 2026-04-10 00:54:56.547026 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-10 00:54:56.547029 | orchestrator | Friday 10 April 2026 00:52:35 +0000 (0:00:01.058) 0:08:01.301 ********** 2026-04-10 00:54:56.547033 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547037 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547040 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547044 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547048 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547052 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547055 | orchestrator | 2026-04-10 00:54:56.547059 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-10 00:54:56.547063 | orchestrator | Friday 10 April 2026 00:52:36 +0000 (0:00:01.275) 0:08:02.577 ********** 2026-04-10 00:54:56.547067 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.547071 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.547074 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.547078 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547082 | orchestrator | skipping: [testbed-node-4] 2026-04-10 
00:54:56.547086 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547089 | orchestrator | 2026-04-10 00:54:56.547093 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-10 00:54:56.547097 | orchestrator | Friday 10 April 2026 00:52:37 +0000 (0:00:00.585) 0:08:03.162 ********** 2026-04-10 00:54:56.547101 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547105 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547108 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547112 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547116 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547120 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547123 | orchestrator | 2026-04-10 00:54:56.547127 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-10 00:54:56.547131 | orchestrator | Friday 10 April 2026 00:52:38 +0000 (0:00:00.626) 0:08:03.789 ********** 2026-04-10 00:54:56.547135 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.547139 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.547142 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.547146 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547150 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547154 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547157 | orchestrator | 2026-04-10 00:54:56.547161 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-10 00:54:56.547165 | orchestrator | Friday 10 April 2026 00:52:38 +0000 (0:00:00.535) 0:08:04.324 ********** 2026-04-10 00:54:56.547169 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.547172 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.547176 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.547180 | orchestrator | ok: 
[testbed-node-3] 2026-04-10 00:54:56.547184 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547188 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547191 | orchestrator | 2026-04-10 00:54:56.547195 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-10 00:54:56.547199 | orchestrator | Friday 10 April 2026 00:52:39 +0000 (0:00:00.724) 0:08:05.049 ********** 2026-04-10 00:54:56.547207 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.547211 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.547215 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.547219 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547222 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547226 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547230 | orchestrator | 2026-04-10 00:54:56.547234 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-10 00:54:56.547237 | orchestrator | Friday 10 April 2026 00:52:39 +0000 (0:00:00.500) 0:08:05.550 ********** 2026-04-10 00:54:56.547241 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.547245 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:54:56.547269 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.547273 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547277 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547280 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547284 | orchestrator | 2026-04-10 00:54:56.547290 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-10 00:54:56.547294 | orchestrator | Friday 10 April 2026 00:52:40 +0000 (0:00:00.548) 0:08:06.098 ********** 2026-04-10 00:54:56.547298 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:54:56.547302 | orchestrator | skipping: [testbed-node-1] 2026-04-10 
00:54:56.547306 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:54:56.547309 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547313 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547317 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547321 | orchestrator | 2026-04-10 00:54:56.547324 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-10 00:54:56.547328 | orchestrator | Friday 10 April 2026 00:52:41 +0000 (0:00:00.757) 0:08:06.856 ********** 2026-04-10 00:54:56.547332 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547336 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547339 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547343 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547347 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547351 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547355 | orchestrator | 2026-04-10 00:54:56.547358 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-10 00:54:56.547362 | orchestrator | Friday 10 April 2026 00:52:41 +0000 (0:00:00.562) 0:08:07.418 ********** 2026-04-10 00:54:56.547366 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547370 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547373 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547377 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547381 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547384 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547388 | orchestrator | 2026-04-10 00:54:56.547394 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-10 00:54:56.547398 | orchestrator | Friday 10 April 2026 00:52:42 +0000 (0:00:00.847) 0:08:08.266 ********** 2026-04-10 00:54:56.547402 | orchestrator | ok: [testbed-node-0] 
2026-04-10 00:54:56.547405 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547409 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547413 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547417 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547420 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547424 | orchestrator | 2026-04-10 00:54:56.547428 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-10 00:54:56.547432 | orchestrator | Friday 10 April 2026 00:52:43 +0000 (0:00:01.179) 0:08:09.446 ********** 2026-04-10 00:54:56.547435 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.547439 | orchestrator | 2026-04-10 00:54:56.547443 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-10 00:54:56.547447 | orchestrator | Friday 10 April 2026 00:52:47 +0000 (0:00:03.273) 0:08:12.719 ********** 2026-04-10 00:54:56.547455 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547459 | orchestrator | 2026-04-10 00:54:56.547462 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-10 00:54:56.547466 | orchestrator | Friday 10 April 2026 00:52:48 +0000 (0:00:01.606) 0:08:14.326 ********** 2026-04-10 00:54:56.547470 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547474 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.547478 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.547481 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.547485 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.547489 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.547492 | orchestrator | 2026-04-10 00:54:56.547496 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-10 00:54:56.547500 | orchestrator | Friday 10 April 2026 00:52:50 +0000 (0:00:01.627) 
0:08:15.953 ********** 2026-04-10 00:54:56.547504 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.547508 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.547511 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.547515 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.547519 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.547523 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.547526 | orchestrator | 2026-04-10 00:54:56.547530 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-04-10 00:54:56.547534 | orchestrator | Friday 10 April 2026 00:52:51 +0000 (0:00:00.920) 0:08:16.874 ********** 2026-04-10 00:54:56.547538 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.547542 | orchestrator | 2026-04-10 00:54:56.547546 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-10 00:54:56.547549 | orchestrator | Friday 10 April 2026 00:52:52 +0000 (0:00:01.189) 0:08:18.064 ********** 2026-04-10 00:54:56.547553 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.547557 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.547561 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.547565 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.547568 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.547572 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.547576 | orchestrator | 2026-04-10 00:54:56.547580 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-10 00:54:56.547583 | orchestrator | Friday 10 April 2026 00:52:54 +0000 (0:00:01.909) 0:08:19.974 ********** 2026-04-10 00:54:56.547587 | orchestrator | changed: [testbed-node-4] 2026-04-10 
00:54:56.547591 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.547595 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.547598 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.547602 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.547606 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.547610 | orchestrator | 2026-04-10 00:54:56.547613 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-10 00:54:56.547617 | orchestrator | Friday 10 April 2026 00:52:57 +0000 (0:00:03.628) 0:08:23.602 ********** 2026-04-10 00:54:56.547621 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.547625 | orchestrator | 2026-04-10 00:54:56.547629 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-10 00:54:56.547635 | orchestrator | Friday 10 April 2026 00:52:59 +0000 (0:00:01.287) 0:08:24.890 ********** 2026-04-10 00:54:56.547639 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547643 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547647 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547650 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547654 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547662 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547666 | orchestrator | 2026-04-10 00:54:56.547669 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-10 00:54:56.547673 | orchestrator | Friday 10 April 2026 00:52:59 +0000 (0:00:00.623) 0:08:25.514 ********** 2026-04-10 00:54:56.547677 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:54:56.547681 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:54:56.547684 | orchestrator | changed: 
[testbed-node-3] 2026-04-10 00:54:56.547688 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.547692 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.547696 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:54:56.547699 | orchestrator | 2026-04-10 00:54:56.547703 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-10 00:54:56.547707 | orchestrator | Friday 10 April 2026 00:53:02 +0000 (0:00:03.066) 0:08:28.580 ********** 2026-04-10 00:54:56.547711 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:54:56.547714 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:54:56.547718 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:54:56.547722 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547726 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547729 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547733 | orchestrator | 2026-04-10 00:54:56.547739 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-10 00:54:56.547743 | orchestrator | 2026-04-10 00:54:56.547747 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-10 00:54:56.547751 | orchestrator | Friday 10 April 2026 00:53:03 +0000 (0:00:00.904) 0:08:29.485 ********** 2026-04-10 00:54:56.547755 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.547759 | orchestrator | 2026-04-10 00:54:56.547762 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-10 00:54:56.547766 | orchestrator | Friday 10 April 2026 00:53:04 +0000 (0:00:00.426) 0:08:29.911 ********** 2026-04-10 00:54:56.547770 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 
00:54:56.547774 | orchestrator | 2026-04-10 00:54:56.547777 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-10 00:54:56.547781 | orchestrator | Friday 10 April 2026 00:53:04 +0000 (0:00:00.617) 0:08:30.529 ********** 2026-04-10 00:54:56.547785 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547789 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547792 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547796 | orchestrator | 2026-04-10 00:54:56.547800 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-10 00:54:56.547804 | orchestrator | Friday 10 April 2026 00:53:05 +0000 (0:00:00.256) 0:08:30.785 ********** 2026-04-10 00:54:56.547807 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547811 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547815 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547818 | orchestrator | 2026-04-10 00:54:56.547822 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-10 00:54:56.547826 | orchestrator | Friday 10 April 2026 00:53:05 +0000 (0:00:00.674) 0:08:31.460 ********** 2026-04-10 00:54:56.547830 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547834 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547837 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547841 | orchestrator | 2026-04-10 00:54:56.547845 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-10 00:54:56.547849 | orchestrator | Friday 10 April 2026 00:53:06 +0000 (0:00:00.654) 0:08:32.114 ********** 2026-04-10 00:54:56.547852 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547856 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547860 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547863 | orchestrator | 2026-04-10 
00:54:56.547870 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-10 00:54:56.547874 | orchestrator | Friday 10 April 2026 00:53:07 +0000 (0:00:00.960) 0:08:33.075 ********** 2026-04-10 00:54:56.547878 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547881 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547885 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547889 | orchestrator | 2026-04-10 00:54:56.547893 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-10 00:54:56.547897 | orchestrator | Friday 10 April 2026 00:53:07 +0000 (0:00:00.284) 0:08:33.359 ********** 2026-04-10 00:54:56.547900 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547904 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547908 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547911 | orchestrator | 2026-04-10 00:54:56.547915 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-10 00:54:56.547919 | orchestrator | Friday 10 April 2026 00:53:07 +0000 (0:00:00.253) 0:08:33.613 ********** 2026-04-10 00:54:56.547923 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547926 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.547930 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.547934 | orchestrator | 2026-04-10 00:54:56.547938 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-10 00:54:56.547941 | orchestrator | Friday 10 April 2026 00:53:08 +0000 (0:00:00.284) 0:08:33.897 ********** 2026-04-10 00:54:56.547945 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547949 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547953 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547957 | orchestrator | 2026-04-10 00:54:56.547960 | 
orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-10 00:54:56.547964 | orchestrator | Friday 10 April 2026 00:53:09 +0000 (0:00:00.867) 0:08:34.764 ********** 2026-04-10 00:54:56.547968 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.547972 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.547975 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.547979 | orchestrator | 2026-04-10 00:54:56.547985 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-10 00:54:56.547989 | orchestrator | Friday 10 April 2026 00:53:09 +0000 (0:00:00.740) 0:08:35.505 ********** 2026-04-10 00:54:56.547993 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.547997 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548000 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548004 | orchestrator | 2026-04-10 00:54:56.548008 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-10 00:54:56.548012 | orchestrator | Friday 10 April 2026 00:53:10 +0000 (0:00:00.265) 0:08:35.770 ********** 2026-04-10 00:54:56.548015 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548019 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548023 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548027 | orchestrator | 2026-04-10 00:54:56.548030 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-10 00:54:56.548034 | orchestrator | Friday 10 April 2026 00:53:10 +0000 (0:00:00.278) 0:08:36.048 ********** 2026-04-10 00:54:56.548038 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548041 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548045 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548049 | orchestrator | 2026-04-10 00:54:56.548053 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mds_status] ****************************** 2026-04-10 00:54:56.548056 | orchestrator | Friday 10 April 2026 00:53:11 +0000 (0:00:00.713) 0:08:36.762 ********** 2026-04-10 00:54:56.548060 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548064 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548068 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548071 | orchestrator | 2026-04-10 00:54:56.548077 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-10 00:54:56.548084 | orchestrator | Friday 10 April 2026 00:53:11 +0000 (0:00:00.340) 0:08:37.103 ********** 2026-04-10 00:54:56.548088 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548092 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548095 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548099 | orchestrator | 2026-04-10 00:54:56.548103 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-10 00:54:56.548107 | orchestrator | Friday 10 April 2026 00:53:11 +0000 (0:00:00.343) 0:08:37.446 ********** 2026-04-10 00:54:56.548111 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548114 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548118 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548122 | orchestrator | 2026-04-10 00:54:56.548126 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-10 00:54:56.548129 | orchestrator | Friday 10 April 2026 00:53:12 +0000 (0:00:00.308) 0:08:37.755 ********** 2026-04-10 00:54:56.548133 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548137 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548141 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548144 | orchestrator | 2026-04-10 00:54:56.548148 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] 
****************************** 2026-04-10 00:54:56.548152 | orchestrator | Friday 10 April 2026 00:53:12 +0000 (0:00:00.717) 0:08:38.472 ********** 2026-04-10 00:54:56.548156 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548159 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548163 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548167 | orchestrator | 2026-04-10 00:54:56.548171 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-10 00:54:56.548174 | orchestrator | Friday 10 April 2026 00:53:13 +0000 (0:00:00.322) 0:08:38.795 ********** 2026-04-10 00:54:56.548178 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548182 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548186 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548189 | orchestrator | 2026-04-10 00:54:56.548193 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-10 00:54:56.548197 | orchestrator | Friday 10 April 2026 00:53:13 +0000 (0:00:00.362) 0:08:39.157 ********** 2026-04-10 00:54:56.548201 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548205 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548208 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548212 | orchestrator | 2026-04-10 00:54:56.548216 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-10 00:54:56.548219 | orchestrator | Friday 10 April 2026 00:53:14 +0000 (0:00:00.942) 0:08:40.100 ********** 2026-04-10 00:54:56.548223 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548227 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548231 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-10 00:54:56.548235 | orchestrator | 2026-04-10 00:54:56.548238 | orchestrator | TASK [ceph-facts : Get current 
default crush rule details] ********************* 2026-04-10 00:54:56.548242 | orchestrator | Friday 10 April 2026 00:53:14 +0000 (0:00:00.511) 0:08:40.611 ********** 2026-04-10 00:54:56.548246 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-10 00:54:56.548269 | orchestrator | 2026-04-10 00:54:56.548273 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-10 00:54:56.548277 | orchestrator | Friday 10 April 2026 00:53:16 +0000 (0:00:01.748) 0:08:42.360 ********** 2026-04-10 00:54:56.548282 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-10 00:54:56.548287 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548291 | orchestrator | 2026-04-10 00:54:56.548295 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-10 00:54:56.548302 | orchestrator | Friday 10 April 2026 00:53:16 +0000 (0:00:00.228) 0:08:42.589 ********** 2026-04-10 00:54:56.548307 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-10 00:54:56.548317 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-10 00:54:56.548321 | orchestrator | 2026-04-10 00:54:56.548325 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-10 
00:54:56.548328 | orchestrator | Friday 10 April 2026 00:53:23 +0000 (0:00:06.238) 0:08:48.828 ********** 2026-04-10 00:54:56.548332 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-10 00:54:56.548336 | orchestrator | 2026-04-10 00:54:56.548339 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-10 00:54:56.548343 | orchestrator | Friday 10 April 2026 00:53:26 +0000 (0:00:02.881) 0:08:51.709 ********** 2026-04-10 00:54:56.548347 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.548351 | orchestrator | 2026-04-10 00:54:56.548354 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-10 00:54:56.548358 | orchestrator | Friday 10 April 2026 00:53:26 +0000 (0:00:00.826) 0:08:52.536 ********** 2026-04-10 00:54:56.548362 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-10 00:54:56.548368 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-10 00:54:56.548372 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-10 00:54:56.548375 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-10 00:54:56.548379 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-10 00:54:56.548383 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-10 00:54:56.548387 | orchestrator | 2026-04-10 00:54:56.548391 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-10 00:54:56.548394 | orchestrator | Friday 10 April 2026 00:53:28 +0000 (0:00:01.124) 0:08:53.660 ********** 2026-04-10 00:54:56.548398 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 
2026-04-10 00:54:56.548402 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-10 00:54:56.548406 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-10 00:54:56.548409 | orchestrator | 2026-04-10 00:54:56.548413 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-10 00:54:56.548417 | orchestrator | Friday 10 April 2026 00:53:29 +0000 (0:00:01.698) 0:08:55.359 ********** 2026-04-10 00:54:56.548421 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-10 00:54:56.548425 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-10 00:54:56.548429 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548432 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-10 00:54:56.548436 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-10 00:54:56.548440 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.548444 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-10 00:54:56.548448 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-10 00:54:56.548451 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548455 | orchestrator | 2026-04-10 00:54:56.548459 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-10 00:54:56.548463 | orchestrator | Friday 10 April 2026 00:53:31 +0000 (0:00:01.370) 0:08:56.730 ********** 2026-04-10 00:54:56.548472 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548476 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.548480 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548483 | orchestrator | 2026-04-10 00:54:56.548487 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-10 00:54:56.548491 | orchestrator | Friday 10 April 2026 00:53:33 +0000 (0:00:02.602) 0:08:59.332 ********** 2026-04-10 
00:54:56.548495 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548499 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548502 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548506 | orchestrator | 2026-04-10 00:54:56.548510 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-10 00:54:56.548514 | orchestrator | Friday 10 April 2026 00:53:34 +0000 (0:00:00.319) 0:08:59.652 ********** 2026-04-10 00:54:56.548518 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.548522 | orchestrator | 2026-04-10 00:54:56.548525 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-10 00:54:56.548529 | orchestrator | Friday 10 April 2026 00:53:34 +0000 (0:00:00.466) 0:09:00.118 ********** 2026-04-10 00:54:56.548533 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.548537 | orchestrator | 2026-04-10 00:54:56.548541 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-10 00:54:56.548544 | orchestrator | Friday 10 April 2026 00:53:35 +0000 (0:00:00.668) 0:09:00.786 ********** 2026-04-10 00:54:56.548548 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548552 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548556 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.548559 | orchestrator | 2026-04-10 00:54:56.548563 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-10 00:54:56.548567 | orchestrator | Friday 10 April 2026 00:53:36 +0000 (0:00:01.501) 0:09:02.288 ********** 2026-04-10 00:54:56.548594 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548598 | orchestrator | changed: [testbed-node-4] 
2026-04-10 00:54:56.548602 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548606 | orchestrator | 2026-04-10 00:54:56.548612 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-10 00:54:56.548616 | orchestrator | Friday 10 April 2026 00:53:37 +0000 (0:00:01.327) 0:09:03.615 ********** 2026-04-10 00:54:56.548619 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548623 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.548627 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548631 | orchestrator | 2026-04-10 00:54:56.548635 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-10 00:54:56.548638 | orchestrator | Friday 10 April 2026 00:53:40 +0000 (0:00:02.553) 0:09:06.169 ********** 2026-04-10 00:54:56.548642 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.548646 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548649 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548653 | orchestrator | 2026-04-10 00:54:56.548657 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-10 00:54:56.548661 | orchestrator | Friday 10 April 2026 00:53:42 +0000 (0:00:02.054) 0:09:08.224 ********** 2026-04-10 00:54:56.548664 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548668 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548672 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548676 | orchestrator | 2026-04-10 00:54:56.548679 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-10 00:54:56.548683 | orchestrator | Friday 10 April 2026 00:53:44 +0000 (0:00:01.579) 0:09:09.803 ********** 2026-04-10 00:54:56.548687 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548691 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.548694 | 
orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548701 | orchestrator | 2026-04-10 00:54:56.548707 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-10 00:54:56.548711 | orchestrator | Friday 10 April 2026 00:53:44 +0000 (0:00:00.689) 0:09:10.492 ********** 2026-04-10 00:54:56.548715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.548719 | orchestrator | 2026-04-10 00:54:56.548722 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-10 00:54:56.548726 | orchestrator | Friday 10 April 2026 00:53:45 +0000 (0:00:00.598) 0:09:11.091 ********** 2026-04-10 00:54:56.548730 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548733 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548737 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548741 | orchestrator | 2026-04-10 00:54:56.548745 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-10 00:54:56.548748 | orchestrator | Friday 10 April 2026 00:53:46 +0000 (0:00:00.629) 0:09:11.720 ********** 2026-04-10 00:54:56.548752 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.548756 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.548760 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.548763 | orchestrator | 2026-04-10 00:54:56.548767 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-10 00:54:56.548771 | orchestrator | Friday 10 April 2026 00:53:47 +0000 (0:00:01.187) 0:09:12.908 ********** 2026-04-10 00:54:56.548775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:54:56.548779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:54:56.548783 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:54:56.548786 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548790 | orchestrator | 2026-04-10 00:54:56.548794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-10 00:54:56.548798 | orchestrator | Friday 10 April 2026 00:53:48 +0000 (0:00:00.937) 0:09:13.846 ********** 2026-04-10 00:54:56.548802 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548805 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548809 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548813 | orchestrator | 2026-04-10 00:54:56.548816 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-10 00:54:56.548820 | orchestrator | 2026-04-10 00:54:56.548824 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-10 00:54:56.548828 | orchestrator | Friday 10 April 2026 00:53:49 +0000 (0:00:01.459) 0:09:15.306 ********** 2026-04-10 00:54:56.548832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.548836 | orchestrator | 2026-04-10 00:54:56.548839 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-10 00:54:56.548843 | orchestrator | Friday 10 April 2026 00:53:50 +0000 (0:00:00.657) 0:09:15.964 ********** 2026-04-10 00:54:56.548847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.548851 | orchestrator | 2026-04-10 00:54:56.548854 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-10 00:54:56.548858 | orchestrator | Friday 10 April 2026 00:53:51 +0000 (0:00:00.860) 0:09:16.824 ********** 2026-04-10 00:54:56.548862 | orchestrator | 
skipping: [testbed-node-3] 2026-04-10 00:54:56.548866 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548869 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548873 | orchestrator | 2026-04-10 00:54:56.548877 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-10 00:54:56.548881 | orchestrator | Friday 10 April 2026 00:53:51 +0000 (0:00:00.327) 0:09:17.152 ********** 2026-04-10 00:54:56.548884 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548888 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548895 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548899 | orchestrator | 2026-04-10 00:54:56.548902 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-10 00:54:56.548906 | orchestrator | Friday 10 April 2026 00:53:52 +0000 (0:00:00.771) 0:09:17.924 ********** 2026-04-10 00:54:56.548910 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548914 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548917 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548921 | orchestrator | 2026-04-10 00:54:56.548925 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-10 00:54:56.548931 | orchestrator | Friday 10 April 2026 00:53:52 +0000 (0:00:00.661) 0:09:18.586 ********** 2026-04-10 00:54:56.548935 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.548939 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.548942 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.548946 | orchestrator | 2026-04-10 00:54:56.548950 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-10 00:54:56.548954 | orchestrator | Friday 10 April 2026 00:53:53 +0000 (0:00:00.961) 0:09:19.548 ********** 2026-04-10 00:54:56.548957 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548961 | 
orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548965 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548969 | orchestrator | 2026-04-10 00:54:56.548973 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-10 00:54:56.548976 | orchestrator | Friday 10 April 2026 00:53:54 +0000 (0:00:00.343) 0:09:19.891 ********** 2026-04-10 00:54:56.548980 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.548984 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.548988 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.548991 | orchestrator | 2026-04-10 00:54:56.548995 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-10 00:54:56.548999 | orchestrator | Friday 10 April 2026 00:53:54 +0000 (0:00:00.364) 0:09:20.255 ********** 2026-04-10 00:54:56.549002 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549006 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549010 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549014 | orchestrator | 2026-04-10 00:54:56.549018 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-10 00:54:56.549023 | orchestrator | Friday 10 April 2026 00:53:54 +0000 (0:00:00.325) 0:09:20.581 ********** 2026-04-10 00:54:56.549027 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.549031 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.549035 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.549038 | orchestrator | 2026-04-10 00:54:56.549042 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-10 00:54:56.549046 | orchestrator | Friday 10 April 2026 00:53:56 +0000 (0:00:01.212) 0:09:21.794 ********** 2026-04-10 00:54:56.549050 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.549053 | orchestrator | ok: 
[testbed-node-4] 2026-04-10 00:54:56.549057 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.549061 | orchestrator | 2026-04-10 00:54:56.549065 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-10 00:54:56.549068 | orchestrator | Friday 10 April 2026 00:53:56 +0000 (0:00:00.788) 0:09:22.582 ********** 2026-04-10 00:54:56.549072 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549076 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549080 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549083 | orchestrator | 2026-04-10 00:54:56.549087 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-10 00:54:56.549091 | orchestrator | Friday 10 April 2026 00:53:57 +0000 (0:00:00.296) 0:09:22.879 ********** 2026-04-10 00:54:56.549095 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549098 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549102 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549106 | orchestrator | 2026-04-10 00:54:56.549112 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-10 00:54:56.549116 | orchestrator | Friday 10 April 2026 00:53:57 +0000 (0:00:00.308) 0:09:23.187 ********** 2026-04-10 00:54:56.549120 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.549125 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.549133 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.549139 | orchestrator | 2026-04-10 00:54:56.549145 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-10 00:54:56.549152 | orchestrator | Friday 10 April 2026 00:53:57 +0000 (0:00:00.329) 0:09:23.517 ********** 2026-04-10 00:54:56.549158 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.549164 | orchestrator | ok: [testbed-node-4] 2026-04-10 
00:54:56.549170 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.549176 | orchestrator | 2026-04-10 00:54:56.549181 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-10 00:54:56.549187 | orchestrator | Friday 10 April 2026 00:53:58 +0000 (0:00:00.742) 0:09:24.260 ********** 2026-04-10 00:54:56.549193 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.549199 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.549205 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.549210 | orchestrator | 2026-04-10 00:54:56.549217 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-10 00:54:56.549222 | orchestrator | Friday 10 April 2026 00:53:58 +0000 (0:00:00.343) 0:09:24.604 ********** 2026-04-10 00:54:56.549229 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549236 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549243 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549259 | orchestrator | 2026-04-10 00:54:56.549266 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-10 00:54:56.549272 | orchestrator | Friday 10 April 2026 00:53:59 +0000 (0:00:00.327) 0:09:24.932 ********** 2026-04-10 00:54:56.549277 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549283 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549288 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549294 | orchestrator | 2026-04-10 00:54:56.549299 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-10 00:54:56.549305 | orchestrator | Friday 10 April 2026 00:53:59 +0000 (0:00:00.311) 0:09:25.243 ********** 2026-04-10 00:54:56.549310 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549316 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549321 | 
orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549326 | orchestrator | 2026-04-10 00:54:56.549332 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-10 00:54:56.549337 | orchestrator | Friday 10 April 2026 00:54:00 +0000 (0:00:00.742) 0:09:25.986 ********** 2026-04-10 00:54:56.549343 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.549349 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.549354 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.549359 | orchestrator | 2026-04-10 00:54:56.549365 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-10 00:54:56.549370 | orchestrator | Friday 10 April 2026 00:54:00 +0000 (0:00:00.370) 0:09:26.356 ********** 2026-04-10 00:54:56.549376 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.549387 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.549395 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.549402 | orchestrator | 2026-04-10 00:54:56.549409 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-10 00:54:56.549414 | orchestrator | Friday 10 April 2026 00:54:01 +0000 (0:00:00.551) 0:09:26.908 ********** 2026-04-10 00:54:56.549420 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.549425 | orchestrator | 2026-04-10 00:54:56.549431 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-10 00:54:56.549437 | orchestrator | Friday 10 April 2026 00:54:02 +0000 (0:00:00.938) 0:09:27.846 ********** 2026-04-10 00:54:56.549448 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.549454 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-10 00:54:56.549460 | orchestrator | ok: [testbed-node-3 -> {{ 
groups.get(mon_group_name)[0] }}] 2026-04-10 00:54:56.549466 | orchestrator | 2026-04-10 00:54:56.549472 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-10 00:54:56.549478 | orchestrator | Friday 10 April 2026 00:54:04 +0000 (0:00:01.803) 0:09:29.650 ********** 2026-04-10 00:54:56.549484 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-10 00:54:56.549490 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-10 00:54:56.549496 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.549508 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-10 00:54:56.549515 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-10 00:54:56.549519 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.549523 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-10 00:54:56.549526 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-10 00:54:56.549530 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.549534 | orchestrator | 2026-04-10 00:54:56.549538 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-10 00:54:56.549541 | orchestrator | Friday 10 April 2026 00:54:05 +0000 (0:00:01.300) 0:09:30.951 ********** 2026-04-10 00:54:56.549545 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549549 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549552 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549556 | orchestrator | 2026-04-10 00:54:56.549560 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-10 00:54:56.549564 | orchestrator | Friday 10 April 2026 00:54:05 +0000 (0:00:00.634) 0:09:31.585 ********** 2026-04-10 00:54:56.549567 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 
00:54:56.549571 | orchestrator | 2026-04-10 00:54:56.549575 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-10 00:54:56.549579 | orchestrator | Friday 10 April 2026 00:54:06 +0000 (0:00:00.553) 0:09:32.139 ********** 2026-04-10 00:54:56.549582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-10 00:54:56.549587 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-10 00:54:56.549591 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-10 00:54:56.549594 | orchestrator | 2026-04-10 00:54:56.549598 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-10 00:54:56.549602 | orchestrator | Friday 10 April 2026 00:54:07 +0000 (0:00:00.876) 0:09:33.015 ********** 2026-04-10 00:54:56.549606 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.549609 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-10 00:54:56.549613 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.549617 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-10 00:54:56.549620 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.549624 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if 
groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-10 00:54:56.549628 | orchestrator | 2026-04-10 00:54:56.549635 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-10 00:54:56.549639 | orchestrator | Friday 10 April 2026 00:54:11 +0000 (0:00:04.195) 0:09:37.210 ********** 2026-04-10 00:54:56.549643 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.549647 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-10 00:54:56.549650 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.549654 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-10 00:54:56.549658 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:54:56.549661 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-10 00:54:56.549665 | orchestrator | 2026-04-10 00:54:56.549671 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-10 00:54:56.549680 | orchestrator | Friday 10 April 2026 00:54:13 +0000 (0:00:01.822) 0:09:39.032 ********** 2026-04-10 00:54:56.549687 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-10 00:54:56.549693 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.549699 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-10 00:54:56.549704 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.549710 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-10 00:54:56.549716 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.549723 | orchestrator | 2026-04-10 00:54:56.549729 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-10 00:54:56.549735 | orchestrator | Friday 10 April 2026 00:54:14 
+0000 (0:00:01.296) 0:09:40.329 ********** 2026-04-10 00:54:56.549742 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-10 00:54:56.549748 | orchestrator | 2026-04-10 00:54:56.549754 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-10 00:54:56.549760 | orchestrator | Friday 10 April 2026 00:54:14 +0000 (0:00:00.208) 0:09:40.538 ********** 2026-04-10 00:54:56.549766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549796 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549800 | orchestrator | 2026-04-10 00:54:56.549804 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-10 00:54:56.549808 | orchestrator | Friday 10 April 2026 00:54:15 +0000 (0:00:00.853) 0:09:41.391 ********** 2026-04-10 00:54:56.549811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-10 00:54:56.549819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-10 00:54:56.549834 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549838 | orchestrator | 2026-04-10 00:54:56.549841 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-10 00:54:56.549845 | orchestrator | Friday 10 April 2026 00:54:16 +0000 (0:00:00.795) 0:09:42.187 ********** 2026-04-10 00:54:56.549849 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-10 00:54:56.549853 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-10 00:54:56.549856 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-10 00:54:56.549860 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-10 00:54:56.549864 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-10 00:54:56.549868 | orchestrator | 2026-04-10 00:54:56.549871 | orchestrator | TASK [ceph-rgw : 
Include_tasks openstack-keystone.yml] ************************* 2026-04-10 00:54:56.549875 | orchestrator | Friday 10 April 2026 00:54:40 +0000 (0:00:23.849) 0:10:06.037 ********** 2026-04-10 00:54:56.549879 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549883 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549886 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549890 | orchestrator | 2026-04-10 00:54:56.549895 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-10 00:54:56.549902 | orchestrator | Friday 10 April 2026 00:54:41 +0000 (0:00:00.798) 0:10:06.835 ********** 2026-04-10 00:54:56.549908 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.549914 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.549920 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.549925 | orchestrator | 2026-04-10 00:54:56.549931 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-10 00:54:56.549937 | orchestrator | Friday 10 April 2026 00:54:41 +0000 (0:00:00.332) 0:10:07.167 ********** 2026-04-10 00:54:56.549947 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.549954 | orchestrator | 2026-04-10 00:54:56.549960 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-10 00:54:56.549966 | orchestrator | Friday 10 April 2026 00:54:42 +0000 (0:00:00.527) 0:10:07.695 ********** 2026-04-10 00:54:56.549972 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.549979 | orchestrator | 2026-04-10 00:54:56.549985 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-10 00:54:56.549991 | orchestrator | Friday 10 April 
2026 00:54:42 +0000 (0:00:00.815) 0:10:08.510 ********** 2026-04-10 00:54:56.549998 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.550004 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.550011 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.550056 | orchestrator | 2026-04-10 00:54:56.550060 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-10 00:54:56.550064 | orchestrator | Friday 10 April 2026 00:54:44 +0000 (0:00:01.221) 0:10:09.732 ********** 2026-04-10 00:54:56.550068 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.550071 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.550075 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.550079 | orchestrator | 2026-04-10 00:54:56.550082 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-10 00:54:56.550095 | orchestrator | Friday 10 April 2026 00:54:45 +0000 (0:00:01.158) 0:10:10.890 ********** 2026-04-10 00:54:56.550099 | orchestrator | changed: [testbed-node-3] 2026-04-10 00:54:56.550104 | orchestrator | changed: [testbed-node-4] 2026-04-10 00:54:56.550110 | orchestrator | changed: [testbed-node-5] 2026-04-10 00:54:56.550116 | orchestrator | 2026-04-10 00:54:56.550123 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-10 00:54:56.550128 | orchestrator | Friday 10 April 2026 00:54:47 +0000 (0:00:02.158) 0:10:13.048 ********** 2026-04-10 00:54:56.550134 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-10 00:54:56.550140 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-10 00:54:56.550147 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-10 00:54:56.550153 | orchestrator | 2026-04-10 00:54:56.550159 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-10 00:54:56.550165 | orchestrator | Friday 10 April 2026 00:54:49 +0000 (0:00:02.435) 0:10:15.483 ********** 2026-04-10 00:54:56.550172 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.550178 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.550184 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:54:56.550189 | orchestrator | 2026-04-10 00:54:56.550193 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-10 00:54:56.550200 | orchestrator | Friday 10 April 2026 00:54:50 +0000 (0:00:00.802) 0:10:16.286 ********** 2026-04-10 00:54:56.550206 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:54:56.550212 | orchestrator | 2026-04-10 00:54:56.550218 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-10 00:54:56.550224 | orchestrator | Friday 10 April 2026 00:54:51 +0000 (0:00:00.533) 0:10:16.820 ********** 2026-04-10 00:54:56.550229 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.550235 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.550241 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.550247 | orchestrator | 2026-04-10 00:54:56.550302 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-10 00:54:56.550309 | orchestrator | Friday 10 April 2026 00:54:51 +0000 (0:00:00.337) 0:10:17.158 ********** 2026-04-10 00:54:56.550315 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.550321 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:54:56.550328 | orchestrator | skipping: [testbed-node-5] 2026-04-10 
00:54:56.550332 | orchestrator | 2026-04-10 00:54:56.550335 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-10 00:54:56.550339 | orchestrator | Friday 10 April 2026 00:54:52 +0000 (0:00:00.597) 0:10:17.755 ********** 2026-04-10 00:54:56.550343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:54:56.550347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:54:56.550351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:54:56.550354 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:54:56.550358 | orchestrator | 2026-04-10 00:54:56.550362 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-10 00:54:56.550366 | orchestrator | Friday 10 April 2026 00:54:52 +0000 (0:00:00.628) 0:10:18.383 ********** 2026-04-10 00:54:56.550370 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:54:56.550373 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:54:56.550377 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:54:56.550381 | orchestrator | 2026-04-10 00:54:56.550384 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:54:56.550388 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2026-04-10 00:54:56.550398 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-10 00:54:56.550408 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-10 00:54:56.550412 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2026-04-10 00:54:56.550416 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-10 00:54:56.550420 
| orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-10 00:54:56.550423 | orchestrator | 2026-04-10 00:54:56.550427 | orchestrator | 2026-04-10 00:54:56.550431 | orchestrator | 2026-04-10 00:54:56.550435 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:54:56.550438 | orchestrator | Friday 10 April 2026 00:54:53 +0000 (0:00:00.253) 0:10:18.637 ********** 2026-04-10 00:54:56.550442 | orchestrator | =============================================================================== 2026-04-10 00:54:56.550446 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 82.42s 2026-04-10 00:54:56.550449 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.93s 2026-04-10 00:54:56.550453 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 23.85s 2026-04-10 00:54:56.550457 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.14s 2026-04-10 00:54:56.550464 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.46s 2026-04-10 00:54:56.550468 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.59s 2026-04-10 00:54:56.550472 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 8.68s 2026-04-10 00:54:56.550476 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.67s 2026-04-10 00:54:56.550479 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.24s 2026-04-10 00:54:56.550483 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.04s 2026-04-10 00:54:56.550487 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 5.94s 2026-04-10 00:54:56.550490 
| orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.10s 2026-04-10 00:54:56.550494 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 5.04s 2026-04-10 00:54:56.550498 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.78s 2026-04-10 00:54:56.550501 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.72s 2026-04-10 00:54:56.550513 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.20s 2026-04-10 00:54:56.550517 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.86s 2026-04-10 00:54:56.550521 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.63s 2026-04-10 00:54:56.550524 | orchestrator | ceph-mon : Generate initial monmap -------------------------------------- 3.27s 2026-04-10 00:54:56.550528 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.27s 2026-04-10 00:54:56.550532 | orchestrator | 2026-04-10 00:54:56 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:54:56.550536 | orchestrator | 2026-04-10 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:54:59.585073 | orchestrator | 2026-04-10 00:54:59 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:54:59.585163 | orchestrator | 2026-04-10 00:54:59 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:54:59.585195 | orchestrator | 2026-04-10 00:54:59 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:54:59.585203 | orchestrator | 2026-04-10 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:02.629859 | orchestrator | 2026-04-10 00:55:02 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 
2026-04-10 00:55:02.632545 | orchestrator | 2026-04-10 00:55:02 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:02.634541 | orchestrator | 2026-04-10 00:55:02 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:02.634600 | orchestrator | 2026-04-10 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:05.674796 | orchestrator | 2026-04-10 00:55:05 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:05.676673 | orchestrator | 2026-04-10 00:55:05 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:05.678329 | orchestrator | 2026-04-10 00:55:05 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:05.678589 | orchestrator | 2026-04-10 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:08.727906 | orchestrator | 2026-04-10 00:55:08 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:08.731845 | orchestrator | 2026-04-10 00:55:08 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:08.733660 | orchestrator | 2026-04-10 00:55:08 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:08.733939 | orchestrator | 2026-04-10 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:11.784356 | orchestrator | 2026-04-10 00:55:11 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:11.788609 | orchestrator | 2026-04-10 00:55:11 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:11.790205 | orchestrator | 2026-04-10 00:55:11 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:11.790272 | orchestrator | 2026-04-10 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:14.837388 | orchestrator | 2026-04-10 
00:55:14 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:14.839600 | orchestrator | 2026-04-10 00:55:14 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:14.840492 | orchestrator | 2026-04-10 00:55:14 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:14.840525 | orchestrator | 2026-04-10 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:17.894596 | orchestrator | 2026-04-10 00:55:17 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:17.895482 | orchestrator | 2026-04-10 00:55:17 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:17.896945 | orchestrator | 2026-04-10 00:55:17 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:17.896974 | orchestrator | 2026-04-10 00:55:17 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:20.939528 | orchestrator | 2026-04-10 00:55:20 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:20.942097 | orchestrator | 2026-04-10 00:55:20 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:20.943037 | orchestrator | 2026-04-10 00:55:20 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:20.943630 | orchestrator | 2026-04-10 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:23.984744 | orchestrator | 2026-04-10 00:55:23 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:23.988053 | orchestrator | 2026-04-10 00:55:23 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:23.988098 | orchestrator | 2026-04-10 00:55:23 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:23.988104 | orchestrator | 2026-04-10 00:55:23 | INFO  | Wait 1 
second(s) until the next check 2026-04-10 00:55:27.045460 | orchestrator | 2026-04-10 00:55:27 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:27.049111 | orchestrator | 2026-04-10 00:55:27 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:27.051943 | orchestrator | 2026-04-10 00:55:27 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:27.051995 | orchestrator | 2026-04-10 00:55:27 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:30.093321 | orchestrator | 2026-04-10 00:55:30 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:30.096198 | orchestrator | 2026-04-10 00:55:30 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:30.098525 | orchestrator | 2026-04-10 00:55:30 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:30.098579 | orchestrator | 2026-04-10 00:55:30 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:33.140178 | orchestrator | 2026-04-10 00:55:33 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:33.141702 | orchestrator | 2026-04-10 00:55:33 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:33.143499 | orchestrator | 2026-04-10 00:55:33 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:33.143535 | orchestrator | 2026-04-10 00:55:33 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:36.190286 | orchestrator | 2026-04-10 00:55:36 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:36.192544 | orchestrator | 2026-04-10 00:55:36 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state STARTED 2026-04-10 00:55:36.195485 | orchestrator | 2026-04-10 00:55:36 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 
2026-04-10 00:55:36.195593 | orchestrator | 2026-04-10 00:55:36 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:39.249829 | orchestrator | 2026-04-10 00:55:39 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:39.251799 | orchestrator | 2026-04-10 00:55:39 | INFO  | Task 9e8efa2e-fa5d-4eaf-9f59-1e5b03981523 is in state SUCCESS 2026-04-10 00:55:39.252567 | orchestrator | 2026-04-10 00:55:39.252602 | orchestrator | 2026-04-10 00:55:39.252608 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:55:39.252614 | orchestrator | 2026-04-10 00:55:39.252618 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:55:39.252623 | orchestrator | Friday 10 April 2026 00:53:08 +0000 (0:00:00.307) 0:00:00.307 ********** 2026-04-10 00:55:39.252627 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:55:39.252632 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:55:39.252636 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:55:39.252640 | orchestrator | 2026-04-10 00:55:39.252644 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:55:39.252668 | orchestrator | Friday 10 April 2026 00:53:09 +0000 (0:00:00.286) 0:00:00.594 ********** 2026-04-10 00:55:39.252674 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-10 00:55:39.252678 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-10 00:55:39.252682 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-10 00:55:39.252686 | orchestrator | 2026-04-10 00:55:39.252690 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-10 00:55:39.252693 | orchestrator | 2026-04-10 00:55:39.252697 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 
2026-04-10 00:55:39.252701 | orchestrator | Friday 10 April 2026 00:53:09 +0000 (0:00:00.289) 0:00:00.883 ********** 2026-04-10 00:55:39.252705 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:55:39.252710 | orchestrator | 2026-04-10 00:55:39.252779 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-10 00:55:39.252786 | orchestrator | Friday 10 April 2026 00:53:10 +0000 (0:00:00.513) 0:00:01.396 ********** 2026-04-10 00:55:39.252790 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-10 00:55:39.252794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-10 00:55:39.252798 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-10 00:55:39.252801 | orchestrator | 2026-04-10 00:55:39.252805 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-10 00:55:39.252809 | orchestrator | Friday 10 April 2026 00:53:11 +0000 (0:00:01.014) 0:00:02.411 ********** 2026-04-10 00:55:39.252816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.252826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.252849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.252861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.252866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 
00:55:39.252870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.252875 | orchestrator | 2026-04-10 00:55:39.252879 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-10 00:55:39.252882 | orchestrator | Friday 10 April 2026 00:53:12 +0000 (0:00:01.419) 0:00:03.831 ********** 2026-04-10 00:55:39.252886 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:55:39.252890 | orchestrator | 2026-04-10 00:55:39.252897 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-10 00:55:39.252905 | orchestrator | Friday 10 April 2026 00:53:13 +0000 (0:00:00.542) 0:00:04.374 ********** 2026-04-10 00:55:39.252916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.252921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.252925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.252929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.252940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.252948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.252952 | orchestrator | 2026-04-10 00:55:39.252957 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-10 00:55:39.252961 | orchestrator | Friday 10 April 2026 00:53:16 +0000 (0:00:03.173) 0:00:07.548 ********** 2026-04-10 
00:55:39.252965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:55:39.252969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:55:39.252976 | orchestrator | skipping: 
[testbed-node-0] 2026-04-10 00:55:39.252983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:55:39.252990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:55:39.252995 
| orchestrator | skipping: [testbed-node-1] 2026-04-10 00:55:39.252999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:55:39.253003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
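The `(item=…)` dumps repeated throughout this play are kolla-ansible service definitions; each carries a `healthcheck` sub-dict (`interval`, `retries`, `start_period`, `test`, `timeout`) with the intervals stored as bare-second strings. As a hedged illustration only (the field names are taken from the log items above; the mapping to Docker CLI flags is an assumption, not something this playbook does), such a dict could be rendered into `docker run` health flags like this:

```python
# Illustrative sketch: convert a kolla-style healthcheck dict (shape taken
# from the log items above) into `docker run`-style health flags.
# The flag mapping is an assumption for illustration, not kolla's own code.
def healthcheck_flags(hc):
    # kolla stores the time values as bare-second strings, e.g. '30'
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    test = hc["test"]
    if test[0] == "CMD-SHELL":  # shell-form test command, as in the log
        flags.append(f"--health-cmd={test[1]}")
    return flags

# Example healthcheck dict copied from one of the log items above
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
      "timeout": "30"}
print(healthcheck_flags(hc))
```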
2026-04-10 00:55:39.253007 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:55:39.253014 | orchestrator | 2026-04-10 00:55:39.253018 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-10 00:55:39.253022 | orchestrator | Friday 10 April 2026 00:53:16 +0000 (0:00:00.641) 0:00:08.189 ********** 2026-04-10 00:55:39.253028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:55:39.253036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:55:39.253040 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:55:39.253044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:55:39.253048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:55:39.253058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-10 00:55:39.253063 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:55:39.253071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-10 00:55:39.253075 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:55:39.253079 | orchestrator | 2026-04-10 00:55:39.253083 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-10 00:55:39.253087 | orchestrator | Friday 10 April 2026 00:53:17 +0000 (0:00:00.765) 0:00:08.954 ********** 2026-04-10 00:55:39.253091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.253095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.253102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.253116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.253121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.253125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.253132 | orchestrator | 2026-04-10 00:55:39.253136 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-10 00:55:39.253140 | orchestrator | Friday 10 April 2026 00:53:20 +0000 (0:00:02.997) 0:00:11.952 ********** 2026-04-10 00:55:39.253144 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:55:39.253148 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:55:39.253152 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:55:39.253155 | orchestrator | 2026-04-10 00:55:39.253159 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-10 00:55:39.253163 | orchestrator | Friday 10 April 2026 00:53:23 +0000 (0:00:02.690) 0:00:14.642 ********** 2026-04-10 00:55:39.253167 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:55:39.253171 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:55:39.253174 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:55:39.253178 | orchestrator | 2026-04-10 00:55:39.253182 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-10 00:55:39.253186 | orchestrator | Friday 10 April 2026 00:53:25 +0000 (0:00:01.751) 0:00:16.394 ********** 2026-04-10 00:55:39.253193 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.253216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.253222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-10 00:55:39.253226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.253236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.253244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-10 00:55:39.253248 | orchestrator | 2026-04-10 00:55:39.253252 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2026-04-10 00:55:39.253256 | orchestrator | Friday 10 April 2026 00:53:27 +0000 (0:00:02.201) 0:00:18.596 ********** 2026-04-10 00:55:39.253260 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:55:39.253264 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:55:39.253268 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:55:39.253273 | orchestrator | 2026-04-10 00:55:39.253279 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-10 00:55:39.253285 | orchestrator | Friday 10 April 2026 00:53:27 +0000 (0:00:00.562) 0:00:19.158 ********** 2026-04-10 00:55:39.253290 | orchestrator | 2026-04-10 00:55:39.253299 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-10 00:55:39.253306 | orchestrator | Friday 10 April 2026 00:53:27 +0000 (0:00:00.065) 0:00:19.224 ********** 2026-04-10 00:55:39.253314 | orchestrator | 2026-04-10 00:55:39.253319 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-10 00:55:39.253325 | orchestrator | Friday 10 April 2026 00:53:27 +0000 (0:00:00.093) 0:00:19.317 ********** 2026-04-10 00:55:39.253337 | orchestrator | 2026-04-10 00:55:39.253343 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-10 00:55:39.253348 | orchestrator | Friday 10 April 2026 00:53:28 +0000 (0:00:00.072) 0:00:19.390 ********** 2026-04-10 00:55:39.253355 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:55:39.253360 | orchestrator | 2026-04-10 00:55:39.253365 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-10 00:55:39.253371 | orchestrator | Friday 10 April 2026 00:53:28 +0000 (0:00:00.208) 0:00:19.598 ********** 2026-04-10 00:55:39.253376 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:55:39.253381 | 
orchestrator | 2026-04-10 00:55:39.253387 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-10 00:55:39.253392 | orchestrator | Friday 10 April 2026 00:53:28 +0000 (0:00:00.226) 0:00:19.825 **********
2026-04-10 00:55:39.253397 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:55:39.253402 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:55:39.253407 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:55:39.253412 | orchestrator |
2026-04-10 00:55:39.253418 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-04-10 00:55:39.253423 | orchestrator | Friday 10 April 2026 00:54:22 +0000 (0:00:54.052) 0:01:13.877 **********
2026-04-10 00:55:39.253430 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:55:39.253436 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:55:39.253442 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:55:39.253447 | orchestrator |
2026-04-10 00:55:39.253453 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-10 00:55:39.253459 | orchestrator | Friday 10 April 2026 00:55:24 +0000 (0:01:01.482) 0:02:15.360 **********
2026-04-10 00:55:39.253466 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 00:55:39.253472 | orchestrator |
2026-04-10 00:55:39.253479 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-10 00:55:39.253485 | orchestrator | Friday 10 April 2026 00:55:24 +0000 (0:00:00.680) 0:02:16.040 **********
2026-04-10 00:55:39.253492 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:55:39.253498 | orchestrator |
2026-04-10 00:55:39.253505 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-04-10 00:55:39.253509 | orchestrator | Friday 10 April 2026 00:55:27 +0000 (0:00:02.326) 0:02:18.367 **********
2026-04-10 00:55:39.253514 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:55:39.253518 | orchestrator |
2026-04-10 00:55:39.253522 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-04-10 00:55:39.253527 | orchestrator | Friday 10 April 2026 00:55:29 +0000 (0:00:02.264) 0:02:20.632 **********
2026-04-10 00:55:39.253531 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:55:39.253536 | orchestrator |
2026-04-10 00:55:39.253540 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-10 00:55:39.253544 | orchestrator | Friday 10 April 2026 00:55:31 +0000 (0:00:02.202) 0:02:22.835 **********
2026-04-10 00:55:39.253549 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:55:39.253553 | orchestrator |
2026-04-10 00:55:39.253557 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-10 00:55:39.253565 | orchestrator | Friday 10 April 2026 00:55:34 +0000 (0:00:02.502) 0:02:25.337 **********
2026-04-10 00:55:39.253570 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:55:39.253574 | orchestrator |
2026-04-10 00:55:39.253579 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 00:55:39.253586 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-10 00:55:39.253596 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-10 00:55:39.253615 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-10 00:55:39.253621 | orchestrator |
2026-04-10 00:55:39.253627 | orchestrator |
2026-04-10 00:55:39.253633 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:55:39.253639 | orchestrator | Friday 10 April 2026 00:55:36 +0000 (0:00:02.878) 0:02:28.215 **********
2026-04-10 00:55:39.253644 | orchestrator | ===============================================================================
2026-04-10 00:55:39.253650 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 61.48s
2026-04-10 00:55:39.253656 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.05s
2026-04-10 00:55:39.253662 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.17s
2026-04-10 00:55:39.253667 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.00s
2026-04-10 00:55:39.253674 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.88s
2026-04-10 00:55:39.253680 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.69s
2026-04-10 00:55:39.253686 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.50s
2026-04-10 00:55:39.253692 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.33s
2026-04-10 00:55:39.253699 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.26s
2026-04-10 00:55:39.253705 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.20s
2026-04-10 00:55:39.253711 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.20s
2026-04-10 00:55:39.253717 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.75s
2026-04-10 00:55:39.253721 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.42s
2026-04-10 00:55:39.253725 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.01s
2026-04-10 00:55:39.253729 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.77s
2026-04-10 00:55:39.253733 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s
2026-04-10 00:55:39.253737 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.64s
2026-04-10 00:55:39.253740 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s
2026-04-10 00:55:39.253744 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2026-04-10 00:55:39.253748 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2026-04-10 00:55:39.254158 | orchestrator | 2026-04-10 00:55:39 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:55:39.254178 | orchestrator | 2026-04-10 00:55:39 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:55:42.306879 | orchestrator | 2026-04-10 00:55:42 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED
2026-04-10 00:55:42.308296 | orchestrator | 2026-04-10 00:55:42 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:55:42.308344 | orchestrator | 2026-04-10 00:55:42 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:55:45.348816 | orchestrator | 2026-04-10 00:55:45 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED
2026-04-10 00:55:45.350307 | orchestrator | 2026-04-10 00:55:45 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:55:45.350377 | orchestrator | 2026-04-10 00:55:45 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:55:48.395916 | orchestrator | 2026-04-10 00:55:48 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED
2026-04-10 00:55:48.397872 | orchestrator | 2026-04-10 00:55:48 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10
00:55:48.397968 | orchestrator | 2026-04-10 00:55:48 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:51.442943 | orchestrator | 2026-04-10 00:55:51 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:51.444063 | orchestrator | 2026-04-10 00:55:51 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:51.444520 | orchestrator | 2026-04-10 00:55:51 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:54.488402 | orchestrator | 2026-04-10 00:55:54 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:54.489910 | orchestrator | 2026-04-10 00:55:54 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:54.489948 | orchestrator | 2026-04-10 00:55:54 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:55:57.530924 | orchestrator | 2026-04-10 00:55:57 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:55:57.532831 | orchestrator | 2026-04-10 00:55:57 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:55:57.532891 | orchestrator | 2026-04-10 00:55:57 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:56:00.580447 | orchestrator | 2026-04-10 00:56:00 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:56:00.582462 | orchestrator | 2026-04-10 00:56:00 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:56:00.582507 | orchestrator | 2026-04-10 00:56:00 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:56:03.619226 | orchestrator | 2026-04-10 00:56:03 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state STARTED 2026-04-10 00:56:03.620008 | orchestrator | 2026-04-10 00:56:03 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED 2026-04-10 00:56:03.620026 | orchestrator | 2026-04-10 00:56:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-10 00:56:06.665588 | orchestrator | 2026-04-10 00:56:06 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:56:06.668064 | orchestrator | 2026-04-10 00:56:06.668103 | orchestrator | 2026-04-10 00:56:06 | INFO  | Task c8a79c53-09b4-4f25-a610-2eff5bfc865a is in state SUCCESS 2026-04-10 00:56:06.669402 | orchestrator | 2026-04-10 00:56:06.669428 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-04-10 00:56:06.669433 | orchestrator | 2026-04-10 00:56:06.669437 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-04-10 00:56:06.669441 | orchestrator | Friday 10 April 2026 00:53:08 +0000 (0:00:00.097) 0:00:00.097 ********** 2026-04-10 00:56:06.669445 | orchestrator | ok: [localhost] => { 2026-04-10 00:56:06.669450 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-04-10 00:56:06.669454 | orchestrator | } 2026-04-10 00:56:06.669458 | orchestrator | 2026-04-10 00:56:06.669462 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-04-10 00:56:06.669466 | orchestrator | Friday 10 April 2026 00:53:08 +0000 (0:00:00.037) 0:00:00.135 ********** 2026-04-10 00:56:06.669470 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-04-10 00:56:06.669475 | orchestrator | ...ignoring 2026-04-10 00:56:06.669479 | orchestrator | 2026-04-10 00:56:06.669483 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-04-10 00:56:06.669487 | orchestrator | Friday 10 April 2026 00:53:11 +0000 (0:00:02.893) 0:00:03.028 ********** 2026-04-10 00:56:06.669491 | orchestrator | skipping: [localhost] 2026-04-10 00:56:06.669508 | orchestrator | 2026-04-10 00:56:06.669512 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-04-10 00:56:06.669516 | orchestrator | Friday 10 April 2026 00:53:11 +0000 (0:00:00.078) 0:00:03.106 ********** 2026-04-10 00:56:06.669520 | orchestrator | ok: [localhost] 2026-04-10 00:56:06.669524 | orchestrator | 2026-04-10 00:56:06.669528 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:56:06.669532 | orchestrator | 2026-04-10 00:56:06.669535 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:56:06.669539 | orchestrator | Friday 10 April 2026 00:53:12 +0000 (0:00:00.251) 0:00:03.358 ********** 2026-04-10 00:56:06.669543 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.669547 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.669550 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.669554 | orchestrator | 2026-04-10 00:56:06.669558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:56:06.669562 | orchestrator | Friday 10 April 2026 00:53:12 +0000 (0:00:00.276) 0:00:03.635 ********** 2026-04-10 00:56:06.669566 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-10 00:56:06.669570 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
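Editor's note: the ignored failure in "Check MariaDB service" above is a reachability probe. The error text ("Timeout when waiting for search string MariaDB in 192.168.16.9:3306") matches what Ansible's wait_for module reports with a `search_regex`: it connects to the port and looks for the string `MariaDB` in the data the server sends, which works because a MySQL/MariaDB server greets every client with a handshake packet containing its version string, no credentials required. A minimal standalone sketch of that kind of banner probe (host, port, and function name here are illustrative, not taken from the playbook):

```python
import socket

def wait_for_banner(host: str, port: int, token: str, timeout: float = 2.0) -> bool:
    """Return True if the first bytes sent by host:port contain `token`.

    Mirrors the idea behind Ansible's wait_for with search_regex: connect,
    read the unauthenticated server greeting, and look for a marker string.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            greeting = sock.recv(1024)  # MariaDB handshake embeds the version
            return token.encode() in greeting
    except OSError:
        # connection refused or timed out -> service not (yet) listening
        return False

# On a fresh testbed nothing listens on the VIP yet, so the probe returns
# False / times out -- which is why the play ignores the failure and falls
# through to "Set kolla_action_mariadb = kolla_action_ng" (fresh deploy).
```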
2026-04-10 00:56:06.669574 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-10 00:56:06.669577 | orchestrator | 2026-04-10 00:56:06.669581 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-10 00:56:06.669585 | orchestrator | 2026-04-10 00:56:06.669589 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-10 00:56:06.669592 | orchestrator | Friday 10 April 2026 00:53:12 +0000 (0:00:00.389) 0:00:04.024 ********** 2026-04-10 00:56:06.669596 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-10 00:56:06.669600 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-10 00:56:06.669604 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-10 00:56:06.669608 | orchestrator | 2026-04-10 00:56:06.669612 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-10 00:56:06.669615 | orchestrator | Friday 10 April 2026 00:53:13 +0000 (0:00:00.349) 0:00:04.374 ********** 2026-04-10 00:56:06.669619 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:56:06.669623 | orchestrator | 2026-04-10 00:56:06.669633 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-10 00:56:06.669637 | orchestrator | Friday 10 April 2026 00:53:14 +0000 (0:00:00.904) 0:00:05.278 ********** 2026-04-10 00:56:06.669650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-10 00:56:06.669662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-10 00:56:06.669669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-10 00:56:06.669677 | orchestrator | 2026-04-10 00:56:06.669684 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-10 00:56:06.669688 | orchestrator | Friday 10 April 2026 00:53:17 +0000 (0:00:03.259) 0:00:08.537 ********** 2026-04-10 00:56:06.669692 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.669696 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.669701 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.669704 | orchestrator | 2026-04-10 00:56:06.669708 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-10 00:56:06.669712 | orchestrator | Friday 10 April 2026 00:53:17 +0000 (0:00:00.608) 0:00:09.146 ********** 2026-04-10 00:56:06.669716 | orchestrator | skipping: [testbed-node-2] 2026-04-10 
00:56:06.669720 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.669723 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.669727 | orchestrator | 2026-04-10 00:56:06.669738 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-10 00:56:06.669746 | orchestrator | Friday 10 April 2026 00:53:19 +0000 (0:00:01.604) 0:00:10.751 ********** 2026-04-10 00:56:06.669751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-10 00:56:06.669760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-10 00:56:06.669767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-10 
00:56:06.669772 | orchestrator | 2026-04-10 00:56:06.669776 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-10 00:56:06.669780 | orchestrator | Friday 10 April 2026 00:53:23 +0000 (0:00:03.937) 0:00:14.688 ********** 2026-04-10 00:56:06.669783 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.669787 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.669791 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.669795 | orchestrator | 2026-04-10 00:56:06.669799 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-10 00:56:06.669802 | orchestrator | Friday 10 April 2026 00:53:24 +0000 (0:00:01.309) 0:00:15.998 ********** 2026-04-10 00:56:06.669806 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:56:06.669810 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.669816 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:56:06.669819 | orchestrator | 2026-04-10 00:56:06.669823 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-10 00:56:06.669827 | orchestrator | Friday 10 April 2026 00:53:28 +0000 (0:00:04.116) 0:00:20.115 ********** 2026-04-10 00:56:06.669831 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:56:06.669835 | orchestrator | 2026-04-10 00:56:06.669838 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-10 00:56:06.669842 | orchestrator | Friday 10 April 2026 00:53:29 +0000 (0:00:00.569) 0:00:20.685 ********** 2026-04-10 00:56:06.669852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:56:06.669856 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.669863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:56:06.669867 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.669874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:56:06.669881 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.669885 | orchestrator | 2026-04-10 00:56:06.669889 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-10 00:56:06.669893 | orchestrator | Friday 10 April 2026 00:53:32 +0000 (0:00:03.031) 0:00:23.717 ********** 2026-04-10 00:56:06.669897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-10 00:56:06.669901 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:56:06.669909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.12'...}})  2026-04-10 00:56:06.669916 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:56:06.669920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.11'...}})  2026-04-10 00:56:06.669924 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:56:06.669928 | orchestrator |
2026-04-10 00:56:06.669932 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-10 00:56:06.669936 | orchestrator | Friday 10 April 2026 00:53:34 +0000 (0:00:02.375) 0:00:26.092 **********
2026-04-10 00:56:06.669942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.12'...}})  2026-04-10 00:56:06.669952 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:56:06.669959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.10'...}})  2026-04-10 00:56:06.669963 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:56:06.669969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.11'...}})  2026-04-10 00:56:06.669976 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:56:06.670041 | orchestrator |
2026-04-10 00:56:06.670048 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-10 00:56:06.670052 | orchestrator | Friday 10 April 2026 00:53:38 +0000 
(0:00:03.446) 0:00:29.538 ********** 2026-04-10 00:56:06.670061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.10'...}}) 2026-04-10 00:56:06.670076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.12'...}}) 2026-04-10 00:56:06.670087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {...identical to the item above, except 'MYSQL_HOST': '192.168.16.11'...}})
2026-04-10 00:56:06.670092 | orchestrator |
2026-04-10 00:56:06.670096 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-10 00:56:06.670100 | orchestrator | Friday 10 April 2026 00:53:41 +0000 (0:00:03.720) 0:00:33.259 ********** 2026-04-10 00:56:06.670104 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670108 | orchestrator | 
changed: [testbed-node-1] 2026-04-10 00:56:06.670112 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:56:06.670115 | orchestrator | 2026-04-10 00:56:06.670119 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-10 00:56:06.670123 | orchestrator | Friday 10 April 2026 00:53:42 +0000 (0:00:00.923) 0:00:34.183 ********** 2026-04-10 00:56:06.670127 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670131 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.670135 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.670139 | orchestrator | 2026-04-10 00:56:06.670143 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-10 00:56:06.670149 | orchestrator | Friday 10 April 2026 00:53:43 +0000 (0:00:00.323) 0:00:34.506 ********** 2026-04-10 00:56:06.670153 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670157 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.670161 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.670165 | orchestrator | 2026-04-10 00:56:06.670185 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-10 00:56:06.670191 | orchestrator | Friday 10 April 2026 00:53:43 +0000 (0:00:00.308) 0:00:34.814 ********** 2026-04-10 00:56:06.670199 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-10 00:56:06.670205 | orchestrator | ...ignoring 2026-04-10 00:56:06.670212 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-10 00:56:06.670217 | orchestrator | ...ignoring 2026-04-10 00:56:06.670224 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-10 00:56:06.670232 | orchestrator | ...ignoring 2026-04-10 00:56:06.670238 | orchestrator | 2026-04-10 00:56:06.670244 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-10 00:56:06.670249 | orchestrator | Friday 10 April 2026 00:53:54 +0000 (0:00:11.220) 0:00:46.034 ********** 2026-04-10 00:56:06.670255 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670261 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.670271 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.670278 | orchestrator | 2026-04-10 00:56:06.670284 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-10 00:56:06.670290 | orchestrator | Friday 10 April 2026 00:53:55 +0000 (0:00:00.457) 0:00:46.491 ********** 2026-04-10 00:56:06.670296 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.670302 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670308 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670314 | orchestrator | 2026-04-10 00:56:06.670320 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-10 00:56:06.670326 | orchestrator | Friday 10 April 2026 00:53:55 +0000 (0:00:00.423) 0:00:46.915 ********** 2026-04-10 00:56:06.670333 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.670340 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670346 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670353 | orchestrator | 2026-04-10 00:56:06.670359 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-10 00:56:06.670363 | orchestrator | Friday 10 April 2026 00:53:56 +0000 (0:00:00.387) 0:00:47.303 ********** 2026-04-10 00:56:06.670367 | orchestrator | skipping: 
[testbed-node-0] 2026-04-10 00:56:06.670371 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670375 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670378 | orchestrator | 2026-04-10 00:56:06.670382 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-10 00:56:06.670386 | orchestrator | Friday 10 April 2026 00:53:56 +0000 (0:00:00.900) 0:00:48.203 ********** 2026-04-10 00:56:06.670390 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670393 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.670397 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.670401 | orchestrator | 2026-04-10 00:56:06.670405 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-10 00:56:06.670408 | orchestrator | Friday 10 April 2026 00:53:57 +0000 (0:00:00.436) 0:00:48.639 ********** 2026-04-10 00:56:06.670416 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.670420 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670424 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670428 | orchestrator | 2026-04-10 00:56:06.670431 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-10 00:56:06.670440 | orchestrator | Friday 10 April 2026 00:53:57 +0000 (0:00:00.448) 0:00:49.088 ********** 2026-04-10 00:56:06.670444 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670447 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670451 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-10 00:56:06.670455 | orchestrator | 2026-04-10 00:56:06.670459 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-10 00:56:06.670463 | orchestrator | Friday 10 April 2026 00:53:58 +0000 (0:00:00.385) 0:00:49.473 ********** 2026-04-10 
00:56:06.670466 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670470 | orchestrator | 2026-04-10 00:56:06.670474 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-10 00:56:06.670477 | orchestrator | Friday 10 April 2026 00:54:08 +0000 (0:00:10.590) 0:01:00.063 ********** 2026-04-10 00:56:06.670481 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670485 | orchestrator | 2026-04-10 00:56:06.670489 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-10 00:56:06.670492 | orchestrator | Friday 10 April 2026 00:54:09 +0000 (0:00:00.223) 0:01:00.287 ********** 2026-04-10 00:56:06.670496 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.670500 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670503 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670507 | orchestrator | 2026-04-10 00:56:06.670511 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-10 00:56:06.670515 | orchestrator | Friday 10 April 2026 00:54:09 +0000 (0:00:00.770) 0:01:01.057 ********** 2026-04-10 00:56:06.670518 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670522 | orchestrator | 2026-04-10 00:56:06.670526 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-10 00:56:06.670529 | orchestrator | Friday 10 April 2026 00:54:17 +0000 (0:00:07.325) 0:01:08.383 ********** 2026-04-10 00:56:06.670533 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670537 | orchestrator | 2026-04-10 00:56:06.670541 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-10 00:56:06.670544 | orchestrator | Friday 10 April 2026 00:54:18 +0000 (0:00:01.603) 0:01:09.986 ********** 2026-04-10 00:56:06.670548 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670552 | 
orchestrator | 2026-04-10 00:56:06.670555 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-10 00:56:06.670559 | orchestrator | Friday 10 April 2026 00:54:21 +0000 (0:00:02.577) 0:01:12.564 ********** 2026-04-10 00:56:06.670563 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670567 | orchestrator | 2026-04-10 00:56:06.670570 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-10 00:56:06.670574 | orchestrator | Friday 10 April 2026 00:54:21 +0000 (0:00:00.141) 0:01:12.705 ********** 2026-04-10 00:56:06.670578 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.670582 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670585 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670589 | orchestrator | 2026-04-10 00:56:06.670593 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-10 00:56:06.670597 | orchestrator | Friday 10 April 2026 00:54:21 +0000 (0:00:00.314) 0:01:13.020 ********** 2026-04-10 00:56:06.670601 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.670604 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:56:06.670608 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:56:06.670612 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-10 00:56:06.670616 | orchestrator | 2026-04-10 00:56:06.670622 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-10 00:56:06.670626 | orchestrator | skipping: no hosts matched 2026-04-10 00:56:06.670630 | orchestrator | 2026-04-10 00:56:06.670634 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-10 00:56:06.670638 | orchestrator | 2026-04-10 00:56:06.670644 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-04-10 00:56:06.670648 | orchestrator | Friday 10 April 2026 00:54:22 +0000 (0:00:00.343) 0:01:13.363 ********** 2026-04-10 00:56:06.670651 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:56:06.670655 | orchestrator | 2026-04-10 00:56:06.670659 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-10 00:56:06.670663 | orchestrator | Friday 10 April 2026 00:54:39 +0000 (0:00:17.742) 0:01:31.106 ********** 2026-04-10 00:56:06.670666 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.670670 | orchestrator | 2026-04-10 00:56:06.670674 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-10 00:56:06.670677 | orchestrator | Friday 10 April 2026 00:54:55 +0000 (0:00:15.589) 0:01:46.695 ********** 2026-04-10 00:56:06.670681 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.670685 | orchestrator | 2026-04-10 00:56:06.670688 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-10 00:56:06.670692 | orchestrator | 2026-04-10 00:56:06.670696 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-10 00:56:06.670700 | orchestrator | Friday 10 April 2026 00:54:57 +0000 (0:00:02.358) 0:01:49.054 ********** 2026-04-10 00:56:06.670703 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:56:06.670707 | orchestrator | 2026-04-10 00:56:06.670711 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-10 00:56:06.670715 | orchestrator | Friday 10 April 2026 00:55:14 +0000 (0:00:16.793) 0:02:05.847 ********** 2026-04-10 00:56:06.670718 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
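The liveness tasks in this play ("Check MariaDB service port liveness", "Wait for MariaDB service port liveness") poll port 3306 until the server's greeting contains the string "MariaDB" — which is why they time out before the cluster is bootstrapped and succeed after each container restart. A minimal sketch of such a probe (our own helper for illustration, not the playbook's module code; MariaDB pushes its handshake packet, which embeds the server version string, as soon as a client connects, so a plain socket read is enough):

```python
import socket


def greeting_is_mariadb(greeting: bytes) -> bool:
    """The MySQL/MariaDB handshake packet embeds the server version string
    (e.g. b'5.5.5-10.6.16-MariaDB'), so a substring test suffices."""
    return b"MariaDB" in greeting


def mariadb_port_alive(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    """Roughly what the wait-for-port-with-search-string check above does:
    connect to the service port and look for 'MariaDB' in the unsolicited
    server greeting. Any connection error counts as 'not alive yet'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return greeting_is_mariadb(sock.recv(1024))
    except OSError:
        return False
```

The "FAILED - RETRYING: ... (10 retries left)" lines above correspond to re-running such a probe on an interval until it succeeds or the retries are exhausted.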
2026-04-10 00:56:06.670723 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.670728 | orchestrator | 2026-04-10 00:56:06.670734 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-10 00:56:06.670742 | orchestrator | Friday 10 April 2026 00:55:30 +0000 (0:00:15.785) 0:02:21.633 ********** 2026-04-10 00:56:06.670754 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.670761 | orchestrator | 2026-04-10 00:56:06.670766 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-10 00:56:06.670773 | orchestrator | 2026-04-10 00:56:06.670778 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-10 00:56:06.670784 | orchestrator | Friday 10 April 2026 00:55:32 +0000 (0:00:02.281) 0:02:23.915 ********** 2026-04-10 00:56:06.670790 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670795 | orchestrator | 2026-04-10 00:56:06.670801 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-10 00:56:06.670807 | orchestrator | Friday 10 April 2026 00:55:43 +0000 (0:00:11.304) 0:02:35.219 ********** 2026-04-10 00:56:06.670812 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670819 | orchestrator | 2026-04-10 00:56:06.670825 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-10 00:56:06.670831 | orchestrator | Friday 10 April 2026 00:55:48 +0000 (0:00:04.607) 0:02:39.826 ********** 2026-04-10 00:56:06.670837 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670843 | orchestrator | 2026-04-10 00:56:06.670849 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-10 00:56:06.670856 | orchestrator | 2026-04-10 00:56:06.670862 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-10 
00:56:06.670868 | orchestrator | Friday 10 April 2026 00:55:50 +0000 (0:00:02.390) 0:02:42.217 ********** 2026-04-10 00:56:06.670875 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:56:06.670879 | orchestrator | 2026-04-10 00:56:06.670883 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-10 00:56:06.670886 | orchestrator | Friday 10 April 2026 00:55:51 +0000 (0:00:00.625) 0:02:42.842 ********** 2026-04-10 00:56:06.670890 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670894 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670902 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670906 | orchestrator | 2026-04-10 00:56:06.670910 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-10 00:56:06.670914 | orchestrator | Friday 10 April 2026 00:55:54 +0000 (0:00:02.623) 0:02:45.465 ********** 2026-04-10 00:56:06.670917 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670921 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670925 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670929 | orchestrator | 2026-04-10 00:56:06.670932 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-10 00:56:06.670936 | orchestrator | Friday 10 April 2026 00:55:56 +0000 (0:00:02.114) 0:02:47.580 ********** 2026-04-10 00:56:06.670940 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670944 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670948 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670951 | orchestrator | 2026-04-10 00:56:06.670955 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-10 00:56:06.670959 | orchestrator | Friday 10 April 2026 00:55:58 +0000 (0:00:02.579) 0:02:50.159 ********** 
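The "Creating mysql monitor user" task above provisions the account used by the container healthcheck shown in the item dumps, ['CMD-SHELL', '/usr/bin/clustercheck']. clustercheck reports a node healthy or unhealthy based on its Galera state; a rough sketch of that decision (the real clustercheck is a shell script shipped in the kolla image, and the response strings here are illustrative — only the state numbering, 4 = Synced and 2 = Donor/Desynced, and the AVAILABLE_WHEN_DONOR=1 setting come from the log above):

```python
def clustercheck_decision(wsrep_local_state: int,
                          available_when_donor: bool = True) -> tuple[int, str]:
    """Illustrative sketch of the verdict /usr/bin/clustercheck reaches.
    Galera's wsrep_local_state: 4 = Synced, 2 = Donor/Desynced.
    AVAILABLE_WHEN_DONOR=1 (set in the container environment above) keeps a
    node that is donating a state snapshot marked as healthy."""
    if wsrep_local_state == 4:
        return 200, "MariaDB Cluster Node is synced."
    if wsrep_local_state == 2 and available_when_donor:
        return 200, "MariaDB Cluster Node is synced."
    return 503, "MariaDB Cluster Node is not synced."
```

Per the healthcheck dict in the item dumps, Docker runs this check every 30 seconds ('interval': '30') and marks the container unhealthy after 3 consecutive failures ('retries': '3').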
2026-04-10 00:56:06.670963 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.670967 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.670970 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:56:06.670974 | orchestrator | 2026-04-10 00:56:06.670978 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-10 00:56:06.670982 | orchestrator | Friday 10 April 2026 00:56:01 +0000 (0:00:02.488) 0:02:52.648 ********** 2026-04-10 00:56:06.670985 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:56:06.670989 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:56:06.670993 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:56:06.670997 | orchestrator | 2026-04-10 00:56:06.671001 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-10 00:56:06.671005 | orchestrator | Friday 10 April 2026 00:56:03 +0000 (0:00:02.521) 0:02:55.169 ********** 2026-04-10 00:56:06.671011 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:56:06.671015 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:56:06.671019 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:56:06.671023 | orchestrator | 2026-04-10 00:56:06.671027 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:56:06.671031 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-10 00:56:06.671035 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-10 00:56:06.671040 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-10 00:56:06.671043 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-10 00:56:06.671047 | orchestrator | 2026-04-10 00:56:06.671051 | orchestrator | 2026-04-10 
00:56:06.671055 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 00:56:06.671059 | orchestrator | Friday 10 April 2026 00:56:04 +0000 (0:00:00.217) 0:02:55.387 **********
2026-04-10 00:56:06.671063 | orchestrator | ===============================================================================
2026-04-10 00:56:06.671067 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.54s
2026-04-10 00:56:06.671070 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.38s
2026-04-10 00:56:06.671074 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.30s
2026-04-10 00:56:06.671078 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.22s
2026-04-10 00:56:06.671082 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.59s
2026-04-10 00:56:06.671092 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.33s
2026-04-10 00:56:06.671096 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.64s
2026-04-10 00:56:06.671100 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.61s
2026-04-10 00:56:06.671104 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.12s
2026-04-10 00:56:06.671107 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.94s
2026-04-10 00:56:06.671111 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.72s
2026-04-10 00:56:06.671115 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.45s
2026-04-10 00:56:06.671119 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.26s
2026-04-10 00:56:06.671123 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.03s
2026-04-10 00:56:06.671126 | orchestrator | Check MariaDB service --------------------------------------------------- 2.89s
2026-04-10 00:56:06.671130 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.62s
2026-04-10 00:56:06.671134 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.58s
2026-04-10 00:56:06.671138 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.58s
2026-04-10 00:56:06.671142 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.52s
2026-04-10 00:56:06.671145 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.49s
2026-04-10 00:56:06.671149 | orchestrator | 2026-04-10 00:56:06 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:06.676796 | orchestrator | 2026-04-10 00:56:06 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:06.676846 | orchestrator | 2026-04-10 00:56:06 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:09.719032 | orchestrator | 2026-04-10 00:56:09 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:09.719390 | orchestrator | 2026-04-10 00:56:09 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:09.723252 | orchestrator | 2026-04-10 00:56:09 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:09.723294 | orchestrator | 2026-04-10 00:56:09 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:12.769359 | orchestrator | 2026-04-10 00:56:12 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:12.769531 | orchestrator | 2026-04-10 00:56:12 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:12.769557 | orchestrator | 2026-04-10 00:56:12 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:12.769566 | orchestrator | 2026-04-10 00:56:12 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:15.818665 | orchestrator | 2026-04-10 00:56:15 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:15.819500 | orchestrator | 2026-04-10 00:56:15 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:15.822762 | orchestrator | 2026-04-10 00:56:15 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:15.822840 | orchestrator | 2026-04-10 00:56:15 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:18.848781 | orchestrator | 2026-04-10 00:56:18 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:18.848831 | orchestrator | 2026-04-10 00:56:18 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:18.849290 | orchestrator | 2026-04-10 00:56:18 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:18.849303 | orchestrator | 2026-04-10 00:56:18 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:21.875585 | orchestrator | 2026-04-10 00:56:21 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:21.875642 | orchestrator | 2026-04-10 00:56:21 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:21.878303 | orchestrator | 2026-04-10 00:56:21 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:21.878378 | orchestrator | 2026-04-10 00:56:21 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:24.918995 | orchestrator | 2026-04-10 00:56:24 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:24.922390 |
orchestrator | 2026-04-10 00:56:24 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:24.925026 | orchestrator | 2026-04-10 00:56:24 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:24.925454 | orchestrator | 2026-04-10 00:56:24 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:27.977308 | orchestrator | 2026-04-10 00:56:27 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:27.978478 | orchestrator | 2026-04-10 00:56:27 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:27.979521 | orchestrator | 2026-04-10 00:56:27 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:27.979856 | orchestrator | 2026-04-10 00:56:27 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:31.021391 | orchestrator | 2026-04-10 00:56:31 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:31.021487 | orchestrator | 2026-04-10 00:56:31 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:31.021496 | orchestrator | 2026-04-10 00:56:31 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:31.021503 | orchestrator | 2026-04-10 00:56:31 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:34.063817 | orchestrator | 2026-04-10 00:56:34 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:34.064093 | orchestrator | 2026-04-10 00:56:34 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:34.064891 | orchestrator | 2026-04-10 00:56:34 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:34.064920 | orchestrator | 2026-04-10 00:56:34 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:37.098388 | orchestrator | 2026-04-10 00:56:37 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:37.098465 | orchestrator | 2026-04-10 00:56:37 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:37.098842 | orchestrator | 2026-04-10 00:56:37 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:37.098900 | orchestrator | 2026-04-10 00:56:37 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:40.134451 | orchestrator | 2026-04-10 00:56:40 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:40.137581 | orchestrator | 2026-04-10 00:56:40 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:40.137662 | orchestrator | 2026-04-10 00:56:40 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:40.137708 | orchestrator | 2026-04-10 00:56:40 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:43.182722 | orchestrator | 2026-04-10 00:56:43 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:43.183208 | orchestrator | 2026-04-10 00:56:43 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:43.184827 | orchestrator | 2026-04-10 00:56:43 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:43.184878 | orchestrator | 2026-04-10 00:56:43 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:46.218914 | orchestrator | 2026-04-10 00:56:46 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:46.221011 | orchestrator | 2026-04-10 00:56:46 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:46.222418 | orchestrator | 2026-04-10 00:56:46 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:46.222440 | orchestrator | 2026-04-10 00:56:46 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:49.262706 | orchestrator | 2026-04-10 00:56:49 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:49.264267 | orchestrator | 2026-04-10 00:56:49 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:49.265605 | orchestrator | 2026-04-10 00:56:49 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state STARTED
2026-04-10 00:56:49.265760 | orchestrator | 2026-04-10 00:56:49 | INFO  | Wait 1 second(s) until the next check
2026-04-10 00:56:52.301853 | orchestrator | 2026-04-10 00:56:52 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED
2026-04-10 00:56:52.303769 | orchestrator | 2026-04-10 00:56:52 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED
2026-04-10 00:56:52.305218 | orchestrator | 2026-04-10 00:56:52 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED
2026-04-10 00:56:52.308753 | orchestrator | 2026-04-10 00:56:52 | INFO  | Task 04c42624-951b-4ed7-ab6a-695fe0b11038 is in state SUCCESS
2026-04-10 00:56:52.309799 | orchestrator |
2026-04-10 00:56:52.309886 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-10 00:56:52.309908 | orchestrator | 2.16.14
2026-04-10 00:56:52.309915 | orchestrator |
2026-04-10 00:56:52.309922 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-10 00:56:52.309929 | orchestrator |
2026-04-10 00:56:52.309935 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-10 00:56:52.309942 | orchestrator | Friday 10 April 2026 00:54:57 +0000 (0:00:00.532) 0:00:00.532 **********
2026-04-10 00:56:52.309948 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-10 00:56:52.309953 | orchestrator |
2026-04-10 00:56:52.309957 | orchestrator | TASK [ceph-facts : Check
if it is atomic host] *********************************
2026-04-10 00:56:52.309962 | orchestrator | Friday 10 April 2026 00:54:58 +0000 (0:00:00.619) 0:00:01.152 **********
2026-04-10 00:56:52.309966 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.309970 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.309974 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.309978 | orchestrator |
2026-04-10 00:56:52.309982 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-10 00:56:52.309985 | orchestrator | Friday 10 April 2026 00:54:59 +0000 (0:00:00.989) 0:00:02.141 **********
2026-04-10 00:56:52.309989 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.309993 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.309996 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310046 | orchestrator |
2026-04-10 00:56:52.310051 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-10 00:56:52.310055 | orchestrator | Friday 10 April 2026 00:54:59 +0000 (0:00:00.265) 0:00:02.407 **********
2026-04-10 00:56:52.310059 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310062 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.310066 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310070 | orchestrator |
2026-04-10 00:56:52.310074 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-10 00:56:52.310078 | orchestrator | Friday 10 April 2026 00:55:00 +0000 (0:00:00.807) 0:00:03.214 **********
2026-04-10 00:56:52.310082 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310086 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.310090 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310093 | orchestrator |
2026-04-10 00:56:52.310097 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-10 00:56:52.310101 | orchestrator | Friday 10 April 2026 00:55:00 +0000 (0:00:00.298) 0:00:03.512 **********
2026-04-10 00:56:52.310105 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310109 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.310112 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310134 | orchestrator |
2026-04-10 00:56:52.310139 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-10 00:56:52.310143 | orchestrator | Friday 10 April 2026 00:55:01 +0000 (0:00:00.282) 0:00:03.795 **********
2026-04-10 00:56:52.310147 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310151 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.310155 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310158 | orchestrator |
2026-04-10 00:56:52.310162 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-10 00:56:52.310166 | orchestrator | Friday 10 April 2026 00:55:01 +0000 (0:00:00.317) 0:00:04.113 **********
2026-04-10 00:56:52.310170 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310175 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.310179 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.310183 | orchestrator |
2026-04-10 00:56:52.310187 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-10 00:56:52.310202 | orchestrator | Friday 10 April 2026 00:55:02 +0000 (0:00:00.502) 0:00:04.615 **********
2026-04-10 00:56:52.310206 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310209 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.310213 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310217 | orchestrator |
2026-04-10 00:56:52.310220 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-10 00:56:52.310225 | orchestrator | Friday 10 April 2026 00:55:02 +0000 (0:00:00.285) 0:00:04.900 **********
2026-04-10 00:56:52.310229 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-10 00:56:52.310233 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-10 00:56:52.310237 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-10 00:56:52.310240 | orchestrator |
2026-04-10 00:56:52.310244 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-10 00:56:52.310248 | orchestrator | Friday 10 April 2026 00:55:02 +0000 (0:00:00.634) 0:00:05.535 **********
2026-04-10 00:56:52.310252 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310541 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.310549 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310552 | orchestrator |
2026-04-10 00:56:52.310557 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-10 00:56:52.310561 | orchestrator | Friday 10 April 2026 00:55:03 +0000 (0:00:00.402) 0:00:05.938 **********
2026-04-10 00:56:52.310565 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-10 00:56:52.310569 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-10 00:56:52.310582 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-10 00:56:52.310585 | orchestrator |
2026-04-10 00:56:52.310589 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-10 00:56:52.310593 | orchestrator | Friday 10 April 2026 00:55:06 +0000 (0:00:03.013) 0:00:08.951 **********
2026-04-10 00:56:52.310597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-10 00:56:52.310601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-10 00:56:52.310605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-10 00:56:52.310610 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310613 | orchestrator |
2026-04-10 00:56:52.310625 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-10 00:56:52.310629 | orchestrator | Friday 10 April 2026 00:55:06 +0000 (0:00:00.357) 0:00:09.309 **********
2026-04-10 00:56:52.310634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310641 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310645 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310649 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310653 | orchestrator |
2026-04-10 00:56:52.310657 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-10 00:56:52.310661 | orchestrator | Friday 10 April 2026 00:55:07 +0000 (0:00:00.672) 0:00:09.982 **********
2026-04-10 00:56:52.310666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False',
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310673 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310681 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310685 | orchestrator |
2026-04-10 00:56:52.310694 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-10 00:56:52.310698 | orchestrator | Friday 10 April 2026 00:55:07 +0000 (0:00:00.138) 0:00:10.120 **********
2026-04-10 00:56:52.310704 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5e279f46fe2c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-10 00:55:04.337436', 'end': '2026-04-10 00:55:04.377421', 'delta': '0:00:00.039985', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5e279f46fe2c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310714 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fbe9cee944e3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-10 00:55:05.343000', 'end': '2026-04-10 00:55:05.388231', 'delta': '0:00:00.045231', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fbe9cee944e3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310724 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '277ac8566ad1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-10 00:55:06.215544', 'end': '2026-04-10 00:55:06.239471', 'delta': '0:00:00.023927', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['277ac8566ad1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.310728 | orchestrator |
2026-04-10 00:56:52.310732 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-10 00:56:52.310736 | orchestrator | Friday 10 April 2026 00:55:07 +0000 (0:00:00.307) 0:00:10.428 **********
2026-04-10 00:56:52.310739 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310743 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.310747 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.310751 | orchestrator |
2026-04-10 00:56:52.310754 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-10 00:56:52.310758 | orchestrator | Friday 10 April 2026 00:55:08 +0000 (0:00:00.398) 0:00:10.826 **********
2026-04-10 00:56:52.310762 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-10 00:56:52.310766 | orchestrator |
2026-04-10 00:56:52.310770 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-10 00:56:52.310773 | orchestrator | Friday 10 April 2026 00:55:09 +0000 (0:00:01.389) 0:00:12.215 **********
2026-04-10 00:56:52.310777 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310798 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.310802 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.310806 | orchestrator |
2026-04-10 00:56:52.310810 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-10 00:56:52.310814 | orchestrator | Friday 10 April 2026 00:55:09 +0000 (0:00:00.292) 0:00:12.508 **********
2026-04-10 00:56:52.310817 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310821 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.310825 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.310829 | orchestrator |
2026-04-10 00:56:52.310833 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-10 00:56:52.310836 | orchestrator | Friday 10 April 2026 00:55:10 +0000 (0:00:00.413) 0:00:12.922 **********
2026-04-10 00:56:52.310840 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310855 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.310859 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.310863 | orchestrator |
2026-04-10 00:56:52.310867 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-10 00:56:52.310871 | orchestrator | Friday 10 April 2026 00:55:10 +0000 (0:00:00.464) 0:00:13.387 **********
2026-04-10 00:56:52.310914 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.310919 | orchestrator |
2026-04-10 00:56:52.310923 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-10 00:56:52.310930 | orchestrator | Friday 10 April 2026 00:55:10 +0000 (0:00:00.127) 0:00:13.515 **********
2026-04-10 00:56:52.310934 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310937 | orchestrator |
2026-04-10 00:56:52.310941 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-10 00:56:52.310945 | orchestrator | Friday 10 April 2026 00:55:11 +0000 (0:00:00.237) 0:00:13.753 **********
2026-04-10 00:56:52.310949 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310953 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.310956 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.310960 | orchestrator |
2026-04-10 00:56:52.310964 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-10 00:56:52.310968 | orchestrator | Friday 10 April 2026 00:55:11 +0000 (0:00:00.257) 0:00:14.010 **********
2026-04-10 00:56:52.310971 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310975 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.310979 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.310983 | orchestrator |
2026-04-10 00:56:52.310986 | orchestrator | TASK [ceph-facts : Set_fact build devices from
resolved symlinks] **************
2026-04-10 00:56:52.310990 | orchestrator | Friday 10 April 2026 00:55:11 +0000 (0:00:00.316) 0:00:14.327 **********
2026-04-10 00:56:52.310994 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.310997 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.311001 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.311005 | orchestrator |
2026-04-10 00:56:52.311009 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-10 00:56:52.311012 | orchestrator | Friday 10 April 2026 00:55:12 +0000 (0:00:00.501) 0:00:14.829 **********
2026-04-10 00:56:52.311016 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.311020 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.311024 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.311027 | orchestrator |
2026-04-10 00:56:52.311031 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-10 00:56:52.311035 | orchestrator | Friday 10 April 2026 00:55:12 +0000 (0:00:00.303) 0:00:15.133 **********
2026-04-10 00:56:52.311038 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.311042 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.311169 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.311174 | orchestrator |
2026-04-10 00:56:52.311178 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-10 00:56:52.311182 | orchestrator | Friday 10 April 2026 00:55:12 +0000 (0:00:00.303) 0:00:15.437 **********
2026-04-10 00:56:52.311186 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.311190 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.311194 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.311208 | orchestrator |
2026-04-10 00:56:52.311213 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-10 00:56:52.311217 | orchestrator | Friday 10 April 2026 00:55:13 +0000 (0:00:00.316) 0:00:15.753 **********
2026-04-10 00:56:52.311220 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.311224 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.311228 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.311232 | orchestrator |
2026-04-10 00:56:52.311235 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-10 00:56:52.311244 | orchestrator | Friday 10 April 2026 00:55:13 +0000 (0:00:00.497) 0:00:16.251 **********
2026-04-10 00:56:52.311249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659', 'dm-uuid-LVM-HmBRIWxGLI3EGV6kV75sVNxgbPSB5omXHrIQIzDei9cb7WRNNAqcgK7AytWK3YKZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0', 'dm-uuid-LVM-8Osw97PfL7yOFGzOX4qgZueyeAhWhOmOkCSzx6ohrwri6Ap1yw3bOZM3asUyFbv6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64', 'dm-uuid-LVM-sz21BL9rKXHUXi7MHvzuiuEYOO4GuVHIzP8DshAgnCVBbJYANYokb9PpLuHhy1UX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-10 00:56:52.311318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16'], 'labels': ['BOOT'], 'masters': [],
'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81', 'dm-uuid-LVM-bLjMtbKkcMY1XBDWcSo4rp9t2ScEoyS6X4oShYcxTkNtN21H8kUDn4qODgM2cnva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JP9aDr-LzDf-aWue-EhD0-vBcD-llKo-fbqbH0', 'scsi-0QEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a', 'scsi-SQEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoFq0I-grCm-XrVi-NRfm-Ddwc-OPpb-h3TY7p', 'scsi-0QEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e', 'scsi-SQEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755', 'scsi-SQEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-10 00:56:52.311390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311394 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.311397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TLAoeq-QGKe-um9n-KAtM-mSIj-yfND-V0D9P1', 'scsi-0QEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23', 'scsi-SQEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de', 'dm-uuid-LVM-PftoxsgQ52yvPmleTAKNa8K0ekniLGTm5on5NexEjUZz0vte28H1F0vq32VvM5pA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k3QkEJ-MlaZ-9m4I-xd3v-1d2l-iFuh-tq8K6c', 'scsi-0QEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd', 'scsi-SQEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785', 'dm-uuid-LVM-eAqCUQR6qtojDcHqiCNGictIJZdU25jm3vNbBEnjKWJSAV63nUJ3xPpJV0I5T8w0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16', 'scsi-SQEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:56:52.311512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311523 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:56:52.311529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-10 00:56:52.311555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-10 00:56:52.311578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311586 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gSDeM1-SD9t-OsNo-wjZN-B14N-pftC-NP9cBN', 'scsi-0QEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec', 'scsi-SQEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-voZ47o-niq9-fm1G-HLxA-Byj8-Cq3I-INaUdT', 'scsi-0QEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf', 'scsi-SQEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8', 'scsi-SQEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-10 00:56:52.311648 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:56:52.311654 | orchestrator | 2026-04-10 00:56:52.311660 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-10 00:56:52.311666 | orchestrator | Friday 10 April 2026 00:55:14 +0000 (0:00:00.531) 0:00:16.783 ********** 2026-04-10 00:56:52.311674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659', 'dm-uuid-LVM-HmBRIWxGLI3EGV6kV75sVNxgbPSB5omXHrIQIzDei9cb7WRNNAqcgK7AytWK3YKZ'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0', 'dm-uuid-LVM-8Osw97PfL7yOFGzOX4qgZueyeAhWhOmOkCSzx6ohrwri6Ap1yw3bOZM3asUyFbv6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311719 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64', 'dm-uuid-LVM-sz21BL9rKXHUXi7MHvzuiuEYOO4GuVHIzP8DshAgnCVBbJYANYokb9PpLuHhy1UX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81', 'dm-uuid-LVM-bLjMtbKkcMY1XBDWcSo4rp9t2ScEoyS6X4oShYcxTkNtN21H8kUDn4qODgM2cnva'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-10 00:56:52.311752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16', 'scsi-SQEMU_QEMU_HARDDISK_f013a88d-cf1a-4ed1-a814-e61af314bdae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4a24d887--4b45--578e--8445--fe6f68cb2659-osd--block--4a24d887--4b45--578e--8445--fe6f68cb2659'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JP9aDr-LzDf-aWue-EhD0-vBcD-llKo-fbqbH0', 'scsi-0QEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a', 'scsi-SQEMU_QEMU_HARDDISK_7df1152f-d9d4-4643-860e-92853d20f14a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--83f5954c--7956--54fb--af17--18f84b92edf0-osd--block--83f5954c--7956--54fb--af17--18f84b92edf0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NoFq0I-grCm-XrVi-NRfm-Ddwc-OPpb-h3TY7p', 'scsi-0QEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e', 'scsi-SQEMU_QEMU_HARDDISK_c799235e-1f4d-413e-847e-76a649e6822e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311779 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755', 'scsi-SQEMU_QEMU_HARDDISK_83ad9ae7-b217-4c7b-97e6-a7d535a7d755'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311797 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311811 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311815 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.311820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311849 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16', 'scsi-SQEMU_QEMU_HARDDISK_eb5308f5-155d-4496-84bb-67ce0f294762-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311856 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de', 'dm-uuid-LVM-PftoxsgQ52yvPmleTAKNa8K0ekniLGTm5on5NexEjUZz0vte28H1F0vq32VvM5pA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--465b2d07--90ab--575b--b156--9a24eede9b64-osd--block--465b2d07--90ab--575b--b156--9a24eede9b64'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TLAoeq-QGKe-um9n-KAtM-mSIj-yfND-V0D9P1', 'scsi-0QEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23', 'scsi-SQEMU_QEMU_HARDDISK_abddbba1-0dc8-4b4d-8c33-018af0530e23'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311883 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785', 'dm-uuid-LVM-eAqCUQR6qtojDcHqiCNGictIJZdU25jm3vNbBEnjKWJSAV63nUJ3xPpJV0I5T8w0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a684d377--5ec1--594b--83a4--e92528b1ce81-osd--block--a684d377--5ec1--594b--83a4--e92528b1ce81'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k3QkEJ-MlaZ-9m4I-xd3v-1d2l-iFuh-tq8K6c', 'scsi-0QEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd', 'scsi-SQEMU_QEMU_HARDDISK_02e5e60d-aa8c-49f3-b265-76760abc52dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16', 'scsi-SQEMU_QEMU_HARDDISK_42dd6803-c84e-4757-aa8c-571b5d9cbc16'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311933 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311939 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:56:52.311950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311960 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16', 'scsi-SQEMU_QEMU_HARDDISK_e69cb7c4-fa4f-49dd-a1aa-9a651f17aa21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311986 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--09201c46--e11a--5302--956e--912d17e7f9de-osd--block--09201c46--e11a--5302--956e--912d17e7f9de'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gSDeM1-SD9t-OsNo-wjZN-B14N-pftC-NP9cBN', 'scsi-0QEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec', 'scsi-SQEMU_QEMU_HARDDISK_a4e1216f-fa74-4126-b451-31b29817bdec'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.311996 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0863171e--1302--565f--bee5--d18b6804a785-osd--block--0863171e--1302--565f--bee5--d18b6804a785'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-voZ47o-niq9-fm1G-HLxA-Byj8-Cq3I-INaUdT', 'scsi-0QEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf', 'scsi-SQEMU_QEMU_HARDDISK_9b5f2139-44b1-4420-a83a-35d7b8e164cf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.312000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8', 'scsi-SQEMU_QEMU_HARDDISK_433cfae2-239d-480b-959d-b8cd36270ab8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-10 00:56:52.312008 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-10-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-10 00:56:52.312012 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.312016 | orchestrator |
2026-04-10 00:56:52.312020 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-10 00:56:52.312024 | orchestrator | Friday 10 April 2026 00:55:14 +0000 (0:00:00.588) 0:00:17.371 **********
2026-04-10 00:56:52.312028 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.312032 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.312036 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.312039 | orchestrator |
2026-04-10 00:56:52.312043 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-10 00:56:52.312047 | orchestrator | Friday 10 April 2026 00:55:15 +0000 (0:00:00.661) 0:00:18.032 **********
2026-04-10 00:56:52.312051 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.312055 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.312058 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.312062 | orchestrator |
2026-04-10 00:56:52.312066 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-10 00:56:52.312073 | orchestrator | Friday 10 April 2026 00:55:15 +0000 (0:00:00.522) 0:00:18.555 **********
2026-04-10 00:56:52.312077 | orchestrator | ok: [testbed-node-3]
2026-04-10 00:56:52.312080 | orchestrator | ok: [testbed-node-4]
2026-04-10 00:56:52.312084 | orchestrator | ok: [testbed-node-5]
2026-04-10 00:56:52.312088 | orchestrator |
2026-04-10 00:56:52.312092 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-10 00:56:52.312095 | orchestrator | Friday 10 April 2026 00:55:16 +0000 (0:00:00.625) 0:00:19.181 **********
2026-04-10 00:56:52.312099 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.312103 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.312107 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.312111 | orchestrator |
2026-04-10 00:56:52.312114 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-10 00:56:52.312167 | orchestrator | Friday 10 April 2026 00:55:16 +0000 (0:00:00.405) 0:00:19.450 **********
2026-04-10 00:56:52.312171 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.312175 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.312179 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.312182 | orchestrator |
2026-04-10 00:56:52.312186 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-10 00:56:52.312190 | orchestrator | Friday 10 April 2026 00:55:17 +0000 (0:00:00.514) 0:00:19.856 **********
2026-04-10 00:56:52.312194 | orchestrator | skipping: [testbed-node-3]
2026-04-10 00:56:52.312198 | orchestrator | skipping: [testbed-node-4]
2026-04-10 00:56:52.312201 | orchestrator | skipping: [testbed-node-5]
2026-04-10 00:56:52.312205 | orchestrator |
2026-04-10 00:56:52.312209 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-10 00:56:52.312213 | orchestrator | Friday 10 April 2026 00:55:17 +0000 (0:00:00.514) 0:00:20.370 **********
2026-04-10 00:56:52.312217 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-10 00:56:52.312224 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-10 00:56:52.312228 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-10 00:56:52.312232 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-10 00:56:52.312236 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-10 00:56:52.312240 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-10 00:56:52.312243 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-10 00:56:52.312247 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-10 00:56:52.312251 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-10 00:56:52.312255 | orchestrator | 2026-04-10 00:56:52.312259 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-10 00:56:52.312262 | orchestrator | Friday 10 April 2026 00:55:18 +0000 (0:00:00.873) 0:00:21.244 ********** 2026-04-10 00:56:52.312266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-10 00:56:52.312270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-10 00:56:52.312274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-10 00:56:52.312278 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312281 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-10 00:56:52.312285 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-10 00:56:52.312289 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-10 00:56:52.312293 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:56:52.312296 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-10 00:56:52.312300 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-10 00:56:52.312304 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-10 00:56:52.312307 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:56:52.312311 | orchestrator | 2026-04-10 00:56:52.312315 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-10 00:56:52.312323 | orchestrator | Friday 10 April 2026 00:55:19 +0000 (0:00:00.359) 0:00:21.604 ********** 2026-04-10 
00:56:52.312327 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 00:56:52.312331 | orchestrator | 2026-04-10 00:56:52.312335 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-10 00:56:52.312342 | orchestrator | Friday 10 April 2026 00:55:19 +0000 (0:00:00.735) 0:00:22.339 ********** 2026-04-10 00:56:52.312346 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312350 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:56:52.312353 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:56:52.312357 | orchestrator | 2026-04-10 00:56:52.312361 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-10 00:56:52.312365 | orchestrator | Friday 10 April 2026 00:55:20 +0000 (0:00:00.311) 0:00:22.650 ********** 2026-04-10 00:56:52.312368 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312372 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:56:52.312376 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:56:52.312380 | orchestrator | 2026-04-10 00:56:52.312383 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-10 00:56:52.312387 | orchestrator | Friday 10 April 2026 00:55:20 +0000 (0:00:00.293) 0:00:22.944 ********** 2026-04-10 00:56:52.312391 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312395 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:56:52.312399 | orchestrator | skipping: [testbed-node-5] 2026-04-10 00:56:52.312403 | orchestrator | 2026-04-10 00:56:52.312406 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-10 00:56:52.312410 | orchestrator | Friday 10 April 2026 00:55:20 +0000 (0:00:00.312) 0:00:23.256 ********** 2026-04-10 
00:56:52.312414 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:56:52.312418 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:56:52.312421 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:56:52.312425 | orchestrator | 2026-04-10 00:56:52.312429 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-10 00:56:52.312433 | orchestrator | Friday 10 April 2026 00:55:21 +0000 (0:00:00.611) 0:00:23.867 ********** 2026-04-10 00:56:52.312436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:56:52.312440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:56:52.312444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:56:52.312447 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312451 | orchestrator | 2026-04-10 00:56:52.312455 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-10 00:56:52.312459 | orchestrator | Friday 10 April 2026 00:55:21 +0000 (0:00:00.360) 0:00:24.228 ********** 2026-04-10 00:56:52.312462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:56:52.312466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:56:52.312470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:56:52.312474 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312477 | orchestrator | 2026-04-10 00:56:52.312481 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-10 00:56:52.312485 | orchestrator | Friday 10 April 2026 00:55:22 +0000 (0:00:00.406) 0:00:24.634 ********** 2026-04-10 00:56:52.312488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-10 00:56:52.312492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-10 00:56:52.312496 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-10 00:56:52.312500 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312503 | orchestrator | 2026-04-10 00:56:52.312507 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-10 00:56:52.312511 | orchestrator | Friday 10 April 2026 00:55:22 +0000 (0:00:00.384) 0:00:25.019 ********** 2026-04-10 00:56:52.312519 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:56:52.312527 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:56:52.312531 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:56:52.312535 | orchestrator | 2026-04-10 00:56:52.312538 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-10 00:56:52.312542 | orchestrator | Friday 10 April 2026 00:55:22 +0000 (0:00:00.308) 0:00:25.327 ********** 2026-04-10 00:56:52.312546 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-10 00:56:52.312550 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-10 00:56:52.312554 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-10 00:56:52.312557 | orchestrator | 2026-04-10 00:56:52.312561 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-10 00:56:52.312565 | orchestrator | Friday 10 April 2026 00:55:23 +0000 (0:00:00.515) 0:00:25.843 ********** 2026-04-10 00:56:52.312569 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-10 00:56:52.312573 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-10 00:56:52.312576 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-10 00:56:52.312580 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-10 00:56:52.312584 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-10 00:56:52.312588 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-10 00:56:52.312592 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-10 00:56:52.312595 | orchestrator | 2026-04-10 00:56:52.312599 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-10 00:56:52.312603 | orchestrator | Friday 10 April 2026 00:55:24 +0000 (0:00:00.985) 0:00:26.829 ********** 2026-04-10 00:56:52.312607 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-10 00:56:52.312610 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-10 00:56:52.312614 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-10 00:56:52.312618 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-10 00:56:52.312622 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-10 00:56:52.312628 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-10 00:56:52.312632 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-10 00:56:52.312636 | orchestrator | 2026-04-10 00:56:52.312639 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-10 00:56:52.312643 | orchestrator | Friday 10 April 2026 00:55:26 +0000 (0:00:02.003) 0:00:28.832 ********** 2026-04-10 00:56:52.312647 | orchestrator | skipping: [testbed-node-3] 2026-04-10 00:56:52.312650 | orchestrator | skipping: [testbed-node-4] 2026-04-10 00:56:52.312654 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-10 00:56:52.312658 | orchestrator | 2026-04-10 00:56:52.312662 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-10 00:56:52.312665 | orchestrator | Friday 10 April 2026 00:55:26 +0000 (0:00:00.371) 0:00:29.204 ********** 2026-04-10 00:56:52.312671 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-10 00:56:52.312677 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-10 00:56:52.312686 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-10 00:56:52.312692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-10 00:56:52.312698 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-10 00:56:52.312704 | orchestrator | 2026-04-10 00:56:52.312709 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-10 00:56:52.312714 | orchestrator | Friday 10 April 2026 00:56:05 +0000 (0:00:38.949) 0:01:08.153 ********** 2026-04-10 00:56:52.312719 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312727 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312733 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312738 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312743 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312748 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312755 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-10 00:56:52.312761 | orchestrator | 2026-04-10 00:56:52.312767 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-10 00:56:52.312772 | orchestrator | Friday 10 April 2026 00:56:22 +0000 (0:00:17.023) 0:01:25.177 ********** 2026-04-10 00:56:52.312777 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312782 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312788 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312793 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312799 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312805 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312810 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-10 00:56:52.312817 | orchestrator | 2026-04-10 00:56:52.312822 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-10 00:56:52.312829 | orchestrator | Friday 10 April 2026 00:56:31 +0000 (0:00:09.274) 0:01:34.451 ********** 2026-04-10 00:56:52.312834 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312840 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-10 00:56:52.312845 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:56:52.312851 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312862 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-10 00:56:52.312869 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:56:52.312875 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312888 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-10 00:56:52.312895 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:56:52.312901 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312908 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-10 00:56:52.312914 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:56:52.312921 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312927 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-10 00:56:52.312935 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:56:52.312943 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-10 00:56:52.312952 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-10 00:56:52.312957 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-10 00:56:52.312963 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-10 00:56:52.312969 | orchestrator | 2026-04-10 00:56:52.312975 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:56:52.312980 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-10 00:56:52.312987 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-10 00:56:52.312993 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-10 00:56:52.312998 | orchestrator | 2026-04-10 00:56:52.313007 | orchestrator | 2026-04-10 00:56:52.313029 | orchestrator | 2026-04-10 00:56:52.313034 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:56:52.313040 | orchestrator | Friday 10 April 2026 00:56:49 +0000 (0:00:17.631) 0:01:52.082 ********** 2026-04-10 00:56:52.313046 | orchestrator | =============================================================================== 2026-04-10 00:56:52.313051 | orchestrator | create openstack pool(s) ----------------------------------------------- 38.95s 2026-04-10 00:56:52.313057 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.63s 2026-04-10 00:56:52.313062 | orchestrator | generate keys ---------------------------------------------------------- 17.02s 
2026-04-10 00:56:52.313068 | orchestrator | get keys from monitors -------------------------------------------------- 9.27s 2026-04-10 00:56:52.313079 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.01s 2026-04-10 00:56:52.313083 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.00s 2026-04-10 00:56:52.313087 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.39s 2026-04-10 00:56:52.313091 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.99s 2026-04-10 00:56:52.313094 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s 2026-04-10 00:56:52.313098 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2026-04-10 00:56:52.313102 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s 2026-04-10 00:56:52.313106 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s 2026-04-10 00:56:52.313109 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.67s 2026-04-10 00:56:52.313113 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2026-04-10 00:56:52.313131 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2026-04-10 00:56:52.313140 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2026-04-10 00:56:52.313144 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2026-04-10 00:56:52.313148 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.61s 2026-04-10 00:56:52.313151 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2026-04-10 
00:56:52.313155 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.53s 2026-04-10 00:56:55.350614 | orchestrator | 2026-04-10 00:56:55 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:56:55.351421 | orchestrator | 2026-04-10 00:56:55 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:56:55.352921 | orchestrator | 2026-04-10 00:56:55 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:56:55.354488 | orchestrator | 2026-04-10 00:56:55 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:56:58.414822 | orchestrator | 2026-04-10 00:56:58 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:56:58.416264 | orchestrator | 2026-04-10 00:56:58 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:56:58.418391 | orchestrator | 2026-04-10 00:56:58 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:56:58.418459 | orchestrator | 2026-04-10 00:56:58 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:01.471996 | orchestrator | 2026-04-10 00:57:01 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:01.472642 | orchestrator | 2026-04-10 00:57:01 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:01.474579 | orchestrator | 2026-04-10 00:57:01 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:01.474624 | orchestrator | 2026-04-10 00:57:01 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:04.529222 | orchestrator | 2026-04-10 00:57:04 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:04.530885 | orchestrator | 2026-04-10 00:57:04 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:04.532973 | orchestrator | 2026-04-10 
00:57:04 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:04.533288 | orchestrator | 2026-04-10 00:57:04 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:07.592995 | orchestrator | 2026-04-10 00:57:07 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:07.594338 | orchestrator | 2026-04-10 00:57:07 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:07.596069 | orchestrator | 2026-04-10 00:57:07 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:07.596150 | orchestrator | 2026-04-10 00:57:07 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:10.643184 | orchestrator | 2026-04-10 00:57:10 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:10.644846 | orchestrator | 2026-04-10 00:57:10 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:10.646920 | orchestrator | 2026-04-10 00:57:10 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:10.647196 | orchestrator | 2026-04-10 00:57:10 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:13.709850 | orchestrator | 2026-04-10 00:57:13 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:13.712160 | orchestrator | 2026-04-10 00:57:13 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:13.714261 | orchestrator | 2026-04-10 00:57:13 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:13.714337 | orchestrator | 2026-04-10 00:57:13 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:16.757952 | orchestrator | 2026-04-10 00:57:16 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:16.758871 | orchestrator | 2026-04-10 00:57:16 | INFO  | Task 
b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:16.760307 | orchestrator | 2026-04-10 00:57:16 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:16.760357 | orchestrator | 2026-04-10 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:19.812193 | orchestrator | 2026-04-10 00:57:19 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:19.813260 | orchestrator | 2026-04-10 00:57:19 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:19.816528 | orchestrator | 2026-04-10 00:57:19 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:19.816620 | orchestrator | 2026-04-10 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:22.864379 | orchestrator | 2026-04-10 00:57:22 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:22.865281 | orchestrator | 2026-04-10 00:57:22 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:22.866370 | orchestrator | 2026-04-10 00:57:22 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:22.866424 | orchestrator | 2026-04-10 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:25.915675 | orchestrator | 2026-04-10 00:57:25 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:25.917177 | orchestrator | 2026-04-10 00:57:25 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state STARTED 2026-04-10 00:57:25.919609 | orchestrator | 2026-04-10 00:57:25 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:25.919651 | orchestrator | 2026-04-10 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:28.969172 | orchestrator | 2026-04-10 00:57:28 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state 
STARTED 2026-04-10 00:57:28.969916 | orchestrator | 2026-04-10 00:57:28 | INFO  | Task b1efc256-bd99-4e5c-8ff5-bf62cca1eb74 is in state SUCCESS 2026-04-10 00:57:28.972198 | orchestrator | 2026-04-10 00:57:28 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:28.972225 | orchestrator | 2026-04-10 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:32.028415 | orchestrator | 2026-04-10 00:57:32 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:32.030568 | orchestrator | 2026-04-10 00:57:32 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:32.032495 | orchestrator | 2026-04-10 00:57:32 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:32.032552 | orchestrator | 2026-04-10 00:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:35.069703 | orchestrator | 2026-04-10 00:57:35 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:35.070694 | orchestrator | 2026-04-10 00:57:35 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:35.071542 | orchestrator | 2026-04-10 00:57:35 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:35.071579 | orchestrator | 2026-04-10 00:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:38.138890 | orchestrator | 2026-04-10 00:57:38 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:38.139500 | orchestrator | 2026-04-10 00:57:38 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:38.142735 | orchestrator | 2026-04-10 00:57:38 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:38.142809 | orchestrator | 2026-04-10 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:41.195315 | orchestrator | 
2026-04-10 00:57:41 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:41.195393 | orchestrator | 2026-04-10 00:57:41 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:41.197633 | orchestrator | 2026-04-10 00:57:41 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:41.197793 | orchestrator | 2026-04-10 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:44.243395 | orchestrator | 2026-04-10 00:57:44 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:44.244442 | orchestrator | 2026-04-10 00:57:44 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:44.245901 | orchestrator | 2026-04-10 00:57:44 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state STARTED 2026-04-10 00:57:44.245926 | orchestrator | 2026-04-10 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:47.282820 | orchestrator | 2026-04-10 00:57:47 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:47.284554 | orchestrator | 2026-04-10 00:57:47 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:47.287291 | orchestrator | 2026-04-10 00:57:47 | INFO  | Task 17d36447-3d0d-4f9b-b57e-fb93c293f26a is in state SUCCESS 2026-04-10 00:57:47.288582 | orchestrator | 2026-04-10 00:57:47.288608 | orchestrator | 2026-04-10 00:57:47.288613 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-10 00:57:47.288618 | orchestrator | 2026-04-10 00:57:47.288622 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-10 00:57:47.288627 | orchestrator | Friday 10 April 2026 00:56:52 +0000 (0:00:00.199) 0:00:00.199 ********** 2026-04-10 00:57:47.288632 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-10 00:57:47.288637 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288641 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288646 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-10 00:57:47.288653 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288659 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-10 00:57:47.288665 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-10 00:57:47.288672 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-10 00:57:47.288682 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-10 00:57:47.288711 | orchestrator | 2026-04-10 00:57:47.288717 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-10 00:57:47.288723 | orchestrator | Friday 10 April 2026 00:56:57 +0000 (0:00:04.878) 0:00:05.078 ********** 2026-04-10 00:57:47.288729 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-10 00:57:47.288735 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288741 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288747 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 
2026-04-10 00:57:47.288753 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288760 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-10 00:57:47.288766 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-10 00:57:47.288773 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-10 00:57:47.288778 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-10 00:57:47.288782 | orchestrator | 2026-04-10 00:57:47.288786 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-10 00:57:47.288790 | orchestrator | Friday 10 April 2026 00:57:01 +0000 (0:00:04.381) 0:00:09.460 ********** 2026-04-10 00:57:47.288795 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-10 00:57:47.288799 | orchestrator | 2026-04-10 00:57:47.288803 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-10 00:57:47.288807 | orchestrator | Friday 10 April 2026 00:57:02 +0000 (0:00:01.073) 0:00:10.534 ********** 2026-04-10 00:57:47.288811 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-10 00:57:47.288816 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288820 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288834 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-10 00:57:47.288838 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.288842 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-10 00:57:47.288845 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-10 00:57:47.288849 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-10 00:57:47.288853 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-10 00:57:47.288857 | orchestrator | 2026-04-10 00:57:47.288860 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-10 00:57:47.288911 | orchestrator | Friday 10 April 2026 00:57:17 +0000 (0:00:14.714) 0:00:25.248 ********** 2026-04-10 00:57:47.288916 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-10 00:57:47.288919 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-10 00:57:47.288924 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-10 00:57:47.288927 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-10 00:57:47.288940 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-10 00:57:47.289043 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-10 00:57:47.289050 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-10 00:57:47.289079 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-10 00:57:47.289084 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-10 
00:57:47.289093 | orchestrator | 2026-04-10 00:57:47.289101 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-10 00:57:47.289107 | orchestrator | Friday 10 April 2026 00:57:20 +0000 (0:00:03.293) 0:00:28.543 ********** 2026-04-10 00:57:47.289114 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-10 00:57:47.289120 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.289126 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.289132 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-10 00:57:47.289152 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-10 00:57:47.289159 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-10 00:57:47.289163 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-10 00:57:47.289167 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-10 00:57:47.289171 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-10 00:57:47.289175 | orchestrator | 2026-04-10 00:57:47.289178 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:57:47.289182 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:57:47.289188 | orchestrator | 2026-04-10 00:57:47.289192 | orchestrator | 2026-04-10 00:57:47.289196 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:57:47.289199 | orchestrator | Friday 10 April 2026 00:57:28 +0000 (0:00:07.398) 0:00:35.941 ********** 2026-04-10 00:57:47.289203 | orchestrator | 
=============================================================================== 2026-04-10 00:57:47.289207 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.71s 2026-04-10 00:57:47.289210 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.40s 2026-04-10 00:57:47.289214 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.88s 2026-04-10 00:57:47.289218 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.38s 2026-04-10 00:57:47.289222 | orchestrator | Check if target directories exist --------------------------------------- 3.29s 2026-04-10 00:57:47.289225 | orchestrator | Create share directory -------------------------------------------------- 1.07s 2026-04-10 00:57:47.289229 | orchestrator | 2026-04-10 00:57:47.289233 | orchestrator | 2026-04-10 00:57:47.289237 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:57:47.289240 | orchestrator | 2026-04-10 00:57:47.289244 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:57:47.289248 | orchestrator | Friday 10 April 2026 00:56:07 +0000 (0:00:00.298) 0:00:00.299 ********** 2026-04-10 00:57:47.289252 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.289256 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.289260 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.289264 | orchestrator | 2026-04-10 00:57:47.289267 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:57:47.289271 | orchestrator | Friday 10 April 2026 00:56:07 +0000 (0:00:00.289) 0:00:00.588 ********** 2026-04-10 00:57:47.289275 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-10 00:57:47.289280 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 
2026-04-10 00:57:47.289295 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-10 00:57:47.289299 | orchestrator | 2026-04-10 00:57:47.289303 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-10 00:57:47.289306 | orchestrator | 2026-04-10 00:57:47.289310 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-10 00:57:47.289314 | orchestrator | Friday 10 April 2026 00:56:08 +0000 (0:00:00.309) 0:00:00.898 ********** 2026-04-10 00:57:47.289317 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:57:47.289321 | orchestrator | 2026-04-10 00:57:47.289325 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-10 00:57:47.289329 | orchestrator | Friday 10 April 2026 00:56:08 +0000 (0:00:00.635) 0:00:01.533 ********** 2026-04-10 00:57:47.289344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.289354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.289369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.289373 | orchestrator | 2026-04-10 00:57:47.289378 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-10 00:57:47.289381 | orchestrator | Friday 10 April 2026 00:56:10 +0000 (0:00:01.617) 0:00:03.151 ********** 2026-04-10 00:57:47.289385 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.289389 | orchestrator | ok: 
[testbed-node-1] 2026-04-10 00:57:47.289393 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.289397 | orchestrator | 2026-04-10 00:57:47.289401 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-10 00:57:47.289404 | orchestrator | Friday 10 April 2026 00:56:10 +0000 (0:00:00.296) 0:00:03.448 ********** 2026-04-10 00:57:47.289408 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-10 00:57:47.289416 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-10 00:57:47.289420 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-10 00:57:47.289423 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-10 00:57:47.289427 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-10 00:57:47.289431 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-10 00:57:47.289435 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-10 00:57:47.289439 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-10 00:57:47.289448 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-10 00:57:47.289453 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-10 00:57:47.289459 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-10 00:57:47.289465 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-10 00:57:47.289471 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-10 00:57:47.289476 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'tacker', 'enabled': False})  2026-04-10 00:57:47.289482 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-10 00:57:47.289488 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-10 00:57:47.289494 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-10 00:57:47.289500 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-10 00:57:47.289506 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-10 00:57:47.289512 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-10 00:57:47.289518 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-10 00:57:47.289524 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-10 00:57:47.289534 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-10 00:57:47.289540 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-10 00:57:47.289547 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-10 00:57:47.289554 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-10 00:57:47.289558 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-10 00:57:47.289562 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'glance', 'enabled': True}) 2026-04-10 00:57:47.289567 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-10 00:57:47.289573 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-10 00:57:47.289578 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-10 00:57:47.289596 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-10 00:57:47.289601 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-10 00:57:47.289607 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-10 00:57:47.289613 | orchestrator | 2026-04-10 00:57:47.289619 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-10 00:57:47.289625 | orchestrator | Friday 10 April 2026 00:56:11 +0000 (0:00:00.720) 0:00:04.168 ********** 2026-04-10 00:57:47.289631 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.289636 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.289641 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.289647 | orchestrator | 2026-04-10 00:57:47.289653 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-10 00:57:47.289659 | orchestrator | Friday 10 April 2026 00:56:11 +0000 (0:00:00.489) 0:00:04.658 ********** 2026-04-10 
00:57:47.289664 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289671 | orchestrator | 2026-04-10 00:57:47.289677 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-10 00:57:47.289683 | orchestrator | Friday 10 April 2026 00:56:12 +0000 (0:00:00.128) 0:00:04.786 ********** 2026-04-10 00:57:47.289689 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289696 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.289703 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.289709 | orchestrator | 2026-04-10 00:57:47.289716 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-10 00:57:47.289722 | orchestrator | Friday 10 April 2026 00:56:12 +0000 (0:00:00.272) 0:00:05.059 ********** 2026-04-10 00:57:47.289728 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.289734 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.289738 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.289742 | orchestrator | 2026-04-10 00:57:47.289746 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-10 00:57:47.289759 | orchestrator | Friday 10 April 2026 00:56:12 +0000 (0:00:00.288) 0:00:05.347 ********** 2026-04-10 00:57:47.289763 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289768 | orchestrator | 2026-04-10 00:57:47.289772 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-10 00:57:47.289776 | orchestrator | Friday 10 April 2026 00:56:12 +0000 (0:00:00.124) 0:00:05.471 ********** 2026-04-10 00:57:47.289780 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289785 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.289789 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.289793 | orchestrator | 2026-04-10 00:57:47.289799 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2026-04-10 00:57:47.289805 | orchestrator | Friday 10 April 2026 00:56:13 +0000 (0:00:00.431) 0:00:05.902 ********** 2026-04-10 00:57:47.289814 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.289822 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.289828 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.289834 | orchestrator | 2026-04-10 00:57:47.289840 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-10 00:57:47.289845 | orchestrator | Friday 10 April 2026 00:56:13 +0000 (0:00:00.269) 0:00:06.172 ********** 2026-04-10 00:57:47.289851 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289857 | orchestrator | 2026-04-10 00:57:47.289863 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-10 00:57:47.289869 | orchestrator | Friday 10 April 2026 00:56:13 +0000 (0:00:00.112) 0:00:06.284 ********** 2026-04-10 00:57:47.289875 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289882 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.289895 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.289901 | orchestrator | 2026-04-10 00:57:47.289907 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-10 00:57:47.289919 | orchestrator | Friday 10 April 2026 00:56:13 +0000 (0:00:00.279) 0:00:06.563 ********** 2026-04-10 00:57:47.289924 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.289927 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.289931 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.289935 | orchestrator | 2026-04-10 00:57:47.289939 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-10 00:57:47.289943 | orchestrator | Friday 10 April 2026 00:56:14 +0000 (0:00:00.285) 0:00:06.849 ********** 
2026-04-10 00:57:47.289946 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289950 | orchestrator | 2026-04-10 00:57:47.289954 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-10 00:57:47.289958 | orchestrator | Friday 10 April 2026 00:56:14 +0000 (0:00:00.158) 0:00:07.007 ********** 2026-04-10 00:57:47.289962 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.289965 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.289969 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.289973 | orchestrator | 2026-04-10 00:57:47.289977 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-10 00:57:47.289982 | orchestrator | Friday 10 April 2026 00:56:14 +0000 (0:00:00.411) 0:00:07.418 ********** 2026-04-10 00:57:47.289987 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.289993 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.290001 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.290009 | orchestrator | 2026-04-10 00:57:47.290074 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-10 00:57:47.290083 | orchestrator | Friday 10 April 2026 00:56:14 +0000 (0:00:00.281) 0:00:07.699 ********** 2026-04-10 00:57:47.290090 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290096 | orchestrator | 2026-04-10 00:57:47.290101 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-10 00:57:47.290107 | orchestrator | Friday 10 April 2026 00:56:15 +0000 (0:00:00.131) 0:00:07.831 ********** 2026-04-10 00:57:47.290114 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290120 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.290126 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.290132 | orchestrator | 2026-04-10 00:57:47.290138 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2026-04-10 00:57:47.290144 | orchestrator | Friday 10 April 2026 00:56:15 +0000 (0:00:00.323) 0:00:08.154 ********** 2026-04-10 00:57:47.290148 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.290152 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.290155 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.290159 | orchestrator | 2026-04-10 00:57:47.290163 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-10 00:57:47.290167 | orchestrator | Friday 10 April 2026 00:56:15 +0000 (0:00:00.478) 0:00:08.632 ********** 2026-04-10 00:57:47.290171 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290174 | orchestrator | 2026-04-10 00:57:47.290178 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-10 00:57:47.290182 | orchestrator | Friday 10 April 2026 00:56:16 +0000 (0:00:00.118) 0:00:08.750 ********** 2026-04-10 00:57:47.290185 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290189 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.290193 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.290196 | orchestrator | 2026-04-10 00:57:47.290200 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-10 00:57:47.290204 | orchestrator | Friday 10 April 2026 00:56:16 +0000 (0:00:00.288) 0:00:09.039 ********** 2026-04-10 00:57:47.290208 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:57:47.290211 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:57:47.290215 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:57:47.290224 | orchestrator | 2026-04-10 00:57:47.290228 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-10 00:57:47.290232 | orchestrator | Friday 10 April 2026 00:56:16 +0000 (0:00:00.282) 0:00:09.322 
**********
2026-04-10 00:57:47.290236 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290240 | orchestrator |
2026-04-10 00:57:47.290243 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-10 00:57:47.290247 | orchestrator | Friday 10 April 2026 00:56:16 +0000 (0:00:00.139) 0:00:09.462 **********
2026-04-10 00:57:47.290251 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290254 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:57:47.290258 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:57:47.290262 | orchestrator |
2026-04-10 00:57:47.290270 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-10 00:57:47.290274 | orchestrator | Friday 10 April 2026 00:56:16 +0000 (0:00:00.270) 0:00:09.732 **********
2026-04-10 00:57:47.290278 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:57:47.290281 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:57:47.290285 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:57:47.290289 | orchestrator |
2026-04-10 00:57:47.290292 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-10 00:57:47.290296 | orchestrator | Friday 10 April 2026 00:56:17 +0000 (0:00:00.366) 0:00:10.098 **********
2026-04-10 00:57:47.290300 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290304 | orchestrator |
2026-04-10 00:57:47.290307 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-10 00:57:47.290311 | orchestrator | Friday 10 April 2026 00:56:17 +0000 (0:00:00.090) 0:00:10.189 **********
2026-04-10 00:57:47.290315 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290318 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:57:47.290322 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:57:47.290326 | orchestrator |
2026-04-10 00:57:47.290329 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-10 00:57:47.290333 | orchestrator | Friday 10 April 2026 00:56:17 +0000 (0:00:00.247) 0:00:10.437 **********
2026-04-10 00:57:47.290337 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:57:47.290341 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:57:47.290344 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:57:47.290348 | orchestrator |
2026-04-10 00:57:47.290352 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-10 00:57:47.290355 | orchestrator | Friday 10 April 2026 00:56:17 +0000 (0:00:00.252) 0:00:10.689 **********
2026-04-10 00:57:47.290359 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290363 | orchestrator |
2026-04-10 00:57:47.290372 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-10 00:57:47.290376 | orchestrator | Friday 10 April 2026 00:56:18 +0000 (0:00:00.106) 0:00:10.795 **********
2026-04-10 00:57:47.290380 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290384 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:57:47.290387 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:57:47.290391 | orchestrator |
2026-04-10 00:57:47.290395 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-10 00:57:47.290398 | orchestrator | Friday 10 April 2026 00:56:18 +0000 (0:00:00.236) 0:00:11.032 **********
2026-04-10 00:57:47.290402 | orchestrator | ok: [testbed-node-0]
2026-04-10 00:57:47.290406 | orchestrator | ok: [testbed-node-1]
2026-04-10 00:57:47.290410 | orchestrator | ok: [testbed-node-2]
2026-04-10 00:57:47.290413 | orchestrator |
2026-04-10 00:57:47.290417 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-10 00:57:47.290421 | orchestrator | Friday 10 April 2026 00:56:18 +0000 (0:00:00.361) 0:00:11.394 **********
2026-04-10 00:57:47.290424 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290428 | orchestrator |
2026-04-10 00:57:47.290432 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-10 00:57:47.290436 | orchestrator | Friday 10 April 2026 00:56:18 +0000 (0:00:00.109) 0:00:11.503 **********
2026-04-10 00:57:47.290443 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290446 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:57:47.290450 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:57:47.290454 | orchestrator |
2026-04-10 00:57:47.290457 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-10 00:57:47.290461 | orchestrator | Friday 10 April 2026 00:56:18 +0000 (0:00:00.229) 0:00:11.733 **********
2026-04-10 00:57:47.290465 | orchestrator | changed: [testbed-node-2]
2026-04-10 00:57:47.290469 | orchestrator | changed: [testbed-node-1]
2026-04-10 00:57:47.290472 | orchestrator | changed: [testbed-node-0]
2026-04-10 00:57:47.290476 | orchestrator |
2026-04-10 00:57:47.290480 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-10 00:57:47.290483 | orchestrator | Friday 10 April 2026 00:56:20 +0000 (0:00:01.392) 0:00:13.126 **********
2026-04-10 00:57:47.290487 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-10 00:57:47.290491 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-10 00:57:47.290495 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-10 00:57:47.290499 | orchestrator |
2026-04-10 00:57:47.290502 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-10 00:57:47.290506 | orchestrator | Friday 10 April 2026 00:56:22 +0000 (0:00:01.832) 0:00:14.958 **********
2026-04-10 00:57:47.290510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-10 00:57:47.290514 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-10 00:57:47.290517 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-10 00:57:47.290521 | orchestrator |
2026-04-10 00:57:47.290525 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-10 00:57:47.290529 | orchestrator | Friday 10 April 2026 00:56:23 +0000 (0:00:01.648) 0:00:16.607 **********
2026-04-10 00:57:47.290533 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-10 00:57:47.290536 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-10 00:57:47.290540 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-10 00:57:47.290544 | orchestrator |
2026-04-10 00:57:47.290548 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-10 00:57:47.290551 | orchestrator | Friday 10 April 2026 00:56:25 +0000 (0:00:01.539) 0:00:18.146 **********
2026-04-10 00:57:47.290555 | orchestrator | skipping: [testbed-node-0]
2026-04-10 00:57:47.290559 | orchestrator | skipping: [testbed-node-1]
2026-04-10 00:57:47.290565 | orchestrator | skipping: [testbed-node-2]
2026-04-10 00:57:47.290569 | orchestrator |
2026-04-10 00:57:47.290573 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-10 00:57:47.290577 | orchestrator | Friday 10 April 2026 00:56:25 +0000 (0:00:00.297) 0:00:18.443 **********
2026-04-10
00:57:47.290580 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290584 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.290588 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.290591 | orchestrator | 2026-04-10 00:57:47.290595 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-10 00:57:47.290599 | orchestrator | Friday 10 April 2026 00:56:25 +0000 (0:00:00.283) 0:00:18.726 ********** 2026-04-10 00:57:47.290602 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:57:47.290606 | orchestrator | 2026-04-10 00:57:47.290610 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-10 00:57:47.290618 | orchestrator | Friday 10 April 2026 00:56:26 +0000 (0:00:00.737) 0:00:19.463 ********** 2026-04-10 00:57:47.290630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.290650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.290669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.290677 | orchestrator | 2026-04-10 00:57:47.290683 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-10 00:57:47.290689 | orchestrator | Friday 10 April 2026 00:56:28 +0000 (0:00:01.711) 0:00:21.175 ********** 2026-04-10 00:57:47.290703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2026-04-10 00:57:47.290720 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:57:47.290737 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.290753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:57:47.290766 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.290772 | orchestrator | 2026-04-10 00:57:47.290777 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-10 00:57:47.290783 | orchestrator | Friday 10 April 2026 00:56:29 +0000 (0:00:00.955) 0:00:22.130 ********** 2026-04-10 00:57:47.290789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:57:47.290795 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:57:47.290822 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.290835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-10 00:57:47.290846 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.290852 | orchestrator | 2026-04-10 00:57:47.290858 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-10 00:57:47.290864 | orchestrator | Friday 10 April 2026 00:56:30 +0000 (0:00:01.175) 0:00:23.306 ********** 2026-04-10 
00:57:47.290875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.290886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.290904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-10 00:57:47.290911 | orchestrator | 2026-04-10 00:57:47.290917 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-10 00:57:47.290923 | orchestrator | Friday 10 April 2026 00:56:31 +0000 (0:00:01.272) 0:00:24.578 ********** 2026-04-10 00:57:47.290930 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:57:47.290936 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:57:47.290942 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:57:47.290948 | orchestrator | 2026-04-10 00:57:47.290954 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-10 00:57:47.290960 | orchestrator | Friday 10 April 2026 00:56:32 +0000 (0:00:00.294) 0:00:24.872 ********** 2026-04-10 00:57:47.290967 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:57:47.290972 | orchestrator | 2026-04-10 00:57:47.290979 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-10 00:57:47.290985 | orchestrator | Friday 10 April 2026 00:56:32 +0000 (0:00:00.683) 0:00:25.556 ********** 2026-04-10 00:57:47.290992 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:57:47.291003 | orchestrator | 2026-04-10 00:57:47.291009 | orchestrator | TASK [horizon : Creating 
Horizon database user and setting permissions] ******** 2026-04-10 00:57:47.291015 | orchestrator | Friday 10 April 2026 00:56:35 +0000 (0:00:02.671) 0:00:28.228 ********** 2026-04-10 00:57:47.291021 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:57:47.291027 | orchestrator | 2026-04-10 00:57:47.291033 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-10 00:57:47.291040 | orchestrator | Friday 10 April 2026 00:56:37 +0000 (0:00:02.440) 0:00:30.668 ********** 2026-04-10 00:57:47.291046 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:57:47.291125 | orchestrator | 2026-04-10 00:57:47.291131 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-10 00:57:47.291135 | orchestrator | Friday 10 April 2026 00:56:54 +0000 (0:00:16.797) 0:00:47.466 ********** 2026-04-10 00:57:47.291139 | orchestrator | 2026-04-10 00:57:47.291146 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-10 00:57:47.291150 | orchestrator | Friday 10 April 2026 00:56:54 +0000 (0:00:00.063) 0:00:47.529 ********** 2026-04-10 00:57:47.291154 | orchestrator | 2026-04-10 00:57:47.291158 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-10 00:57:47.291161 | orchestrator | Friday 10 April 2026 00:56:54 +0000 (0:00:00.062) 0:00:47.592 ********** 2026-04-10 00:57:47.291165 | orchestrator | 2026-04-10 00:57:47.291169 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-10 00:57:47.291173 | orchestrator | Friday 10 April 2026 00:56:54 +0000 (0:00:00.063) 0:00:47.656 ********** 2026-04-10 00:57:47.291177 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:57:47.291180 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:57:47.291185 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:57:47.291188 | orchestrator | 
2026-04-10 00:57:47.291192 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:57:47.291196 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-10 00:57:47.291201 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-10 00:57:47.291205 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-10 00:57:47.291209 | orchestrator | 2026-04-10 00:57:47.291213 | orchestrator | 2026-04-10 00:57:47.291221 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:57:47.291225 | orchestrator | Friday 10 April 2026 00:57:46 +0000 (0:00:51.163) 0:01:38.819 ********** 2026-04-10 00:57:47.291229 | orchestrator | =============================================================================== 2026-04-10 00:57:47.291233 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.16s 2026-04-10 00:57:47.291237 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.80s 2026-04-10 00:57:47.291240 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.67s 2026-04-10 00:57:47.291244 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.44s 2026-04-10 00:57:47.291248 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.83s 2026-04-10 00:57:47.291251 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.71s 2026-04-10 00:57:47.291255 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.65s 2026-04-10 00:57:47.291259 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.62s 2026-04-10 
00:57:47.291263 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s 2026-04-10 00:57:47.291266 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.39s 2026-04-10 00:57:47.291270 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.27s 2026-04-10 00:57:47.291279 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.18s 2026-04-10 00:57:47.291283 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.96s 2026-04-10 00:57:47.291287 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2026-04-10 00:57:47.291291 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2026-04-10 00:57:47.291294 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2026-04-10 00:57:47.291298 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2026-04-10 00:57:47.291302 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-04-10 00:57:47.291306 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2026-04-10 00:57:47.291309 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.43s 2026-04-10 00:57:47.291313 | orchestrator | 2026-04-10 00:57:47 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:50.330335 | orchestrator | 2026-04-10 00:57:50 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:50.332022 | orchestrator | 2026-04-10 00:57:50 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:50.332092 | orchestrator | 2026-04-10 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-04-10 
00:57:53.379554 | orchestrator | 2026-04-10 00:57:53 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:53.380861 | orchestrator | 2026-04-10 00:57:53 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:53.380915 | orchestrator | 2026-04-10 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:56.433171 | orchestrator | 2026-04-10 00:57:56 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:56.435800 | orchestrator | 2026-04-10 00:57:56 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:56.435895 | orchestrator | 2026-04-10 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:57:59.483782 | orchestrator | 2026-04-10 00:57:59 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:57:59.485412 | orchestrator | 2026-04-10 00:57:59 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:57:59.485460 | orchestrator | 2026-04-10 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:02.532802 | orchestrator | 2026-04-10 00:58:02 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:02.533739 | orchestrator | 2026-04-10 00:58:02 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:02.533779 | orchestrator | 2026-04-10 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:05.574468 | orchestrator | 2026-04-10 00:58:05 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:05.576471 | orchestrator | 2026-04-10 00:58:05 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:05.576523 | orchestrator | 2026-04-10 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:08.623461 | orchestrator | 2026-04-10 00:58:08 | INFO  | Task 
fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:08.625133 | orchestrator | 2026-04-10 00:58:08 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:08.625173 | orchestrator | 2026-04-10 00:58:08 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:11.668431 | orchestrator | 2026-04-10 00:58:11 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:11.669960 | orchestrator | 2026-04-10 00:58:11 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:11.669993 | orchestrator | 2026-04-10 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:14.713405 | orchestrator | 2026-04-10 00:58:14 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:14.717104 | orchestrator | 2026-04-10 00:58:14 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:14.717202 | orchestrator | 2026-04-10 00:58:14 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:17.764541 | orchestrator | 2026-04-10 00:58:17 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:17.767678 | orchestrator | 2026-04-10 00:58:17 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:17.767951 | orchestrator | 2026-04-10 00:58:17 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:20.812085 | orchestrator | 2026-04-10 00:58:20 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:20.814277 | orchestrator | 2026-04-10 00:58:20 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:20.814355 | orchestrator | 2026-04-10 00:58:20 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:23.866198 | orchestrator | 2026-04-10 00:58:23 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 
00:58:23.866855 | orchestrator | 2026-04-10 00:58:23 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:23.867321 | orchestrator | 2026-04-10 00:58:23 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:26.917321 | orchestrator | 2026-04-10 00:58:26 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state STARTED 2026-04-10 00:58:26.918606 | orchestrator | 2026-04-10 00:58:26 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:26.918645 | orchestrator | 2026-04-10 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:29.974519 | orchestrator | 2026-04-10 00:58:29 | INFO  | Task fcece417-d2a3-41fc-b578-c97d18b614eb is in state SUCCESS 2026-04-10 00:58:29.976070 | orchestrator | 2026-04-10 00:58:29 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:29.978242 | orchestrator | 2026-04-10 00:58:29 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:29.980992 | orchestrator | 2026-04-10 00:58:29 | INFO  | Task 3f5f3cec-b5ff-44f8-8022-a85954f2fe55 is in state STARTED 2026-04-10 00:58:29.982181 | orchestrator | 2026-04-10 00:58:29 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:29.982445 | orchestrator | 2026-04-10 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:33.042408 | orchestrator | 2026-04-10 00:58:33 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:33.044049 | orchestrator | 2026-04-10 00:58:33 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:33.045127 | orchestrator | 2026-04-10 00:58:33 | INFO  | Task 3f5f3cec-b5ff-44f8-8022-a85954f2fe55 is in state STARTED 2026-04-10 00:58:33.046441 | orchestrator | 2026-04-10 00:58:33 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:33.046506 | orchestrator 
| 2026-04-10 00:58:33 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:36.080936 | orchestrator | 2026-04-10 00:58:36 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:36.081659 | orchestrator | 2026-04-10 00:58:36 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:36.082604 | orchestrator | 2026-04-10 00:58:36 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:36.083655 | orchestrator | 2026-04-10 00:58:36 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:36.084508 | orchestrator | 2026-04-10 00:58:36 | INFO  | Task 3f5f3cec-b5ff-44f8-8022-a85954f2fe55 is in state SUCCESS 2026-04-10 00:58:36.085532 | orchestrator | 2026-04-10 00:58:36 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:36.085744 | orchestrator | 2026-04-10 00:58:36 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:39.122796 | orchestrator | 2026-04-10 00:58:39 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:39.122893 | orchestrator | 2026-04-10 00:58:39 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:39.123908 | orchestrator | 2026-04-10 00:58:39 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:39.124878 | orchestrator | 2026-04-10 00:58:39 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:39.125874 | orchestrator | 2026-04-10 00:58:39 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:39.125924 | orchestrator | 2026-04-10 00:58:39 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:42.169943 | orchestrator | 2026-04-10 00:58:42 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:42.175226 | orchestrator | 2026-04-10 00:58:42 | INFO  | 
Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:42.179415 | orchestrator | 2026-04-10 00:58:42 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:42.179461 | orchestrator | 2026-04-10 00:58:42 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:42.183326 | orchestrator | 2026-04-10 00:58:42 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:42.183376 | orchestrator | 2026-04-10 00:58:42 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:45.216518 | orchestrator | 2026-04-10 00:58:45 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:45.218682 | orchestrator | 2026-04-10 00:58:45 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:45.220913 | orchestrator | 2026-04-10 00:58:45 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:45.224334 | orchestrator | 2026-04-10 00:58:45 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:45.226059 | orchestrator | 2026-04-10 00:58:45 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:45.226127 | orchestrator | 2026-04-10 00:58:45 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:48.270089 | orchestrator | 2026-04-10 00:58:48 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:48.271464 | orchestrator | 2026-04-10 00:58:48 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:48.272155 | orchestrator | 2026-04-10 00:58:48 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:48.272674 | orchestrator | 2026-04-10 00:58:48 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:48.274567 | orchestrator | 2026-04-10 00:58:48 | INFO  | Task 
2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:48.274610 | orchestrator | 2026-04-10 00:58:48 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:51.331936 | orchestrator | 2026-04-10 00:58:51 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:51.332007 | orchestrator | 2026-04-10 00:58:51 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:51.332013 | orchestrator | 2026-04-10 00:58:51 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:51.332017 | orchestrator | 2026-04-10 00:58:51 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:51.332021 | orchestrator | 2026-04-10 00:58:51 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:51.332026 | orchestrator | 2026-04-10 00:58:51 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:54.386899 | orchestrator | 2026-04-10 00:58:54 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:54.388668 | orchestrator | 2026-04-10 00:58:54 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:54.390190 | orchestrator | 2026-04-10 00:58:54 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:54.392093 | orchestrator | 2026-04-10 00:58:54 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:54.393163 | orchestrator | 2026-04-10 00:58:54 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:54.393179 | orchestrator | 2026-04-10 00:58:54 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:58:57.440478 | orchestrator | 2026-04-10 00:58:57 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:58:57.441186 | orchestrator | 2026-04-10 00:58:57 | INFO  | Task 
ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:58:57.442104 | orchestrator | 2026-04-10 00:58:57 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:58:57.443916 | orchestrator | 2026-04-10 00:58:57 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:58:57.446947 | orchestrator | 2026-04-10 00:58:57 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:58:57.447034 | orchestrator | 2026-04-10 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:00.481603 | orchestrator | 2026-04-10 00:59:00 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:59:00.482247 | orchestrator | 2026-04-10 00:59:00 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:00.483496 | orchestrator | 2026-04-10 00:59:00 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:00.484531 | orchestrator | 2026-04-10 00:59:00 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:00.485433 | orchestrator | 2026-04-10 00:59:00 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:00.485462 | orchestrator | 2026-04-10 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:03.530306 | orchestrator | 2026-04-10 00:59:03 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:59:03.530396 | orchestrator | 2026-04-10 00:59:03 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:03.530436 | orchestrator | 2026-04-10 00:59:03 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:03.530443 | orchestrator | 2026-04-10 00:59:03 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:03.530449 | orchestrator | 2026-04-10 00:59:03 | INFO  | Task 
2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:03.530456 | orchestrator | 2026-04-10 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:06.619037 | orchestrator | 2026-04-10 00:59:06 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:59:06.619237 | orchestrator | 2026-04-10 00:59:06 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:06.619858 | orchestrator | 2026-04-10 00:59:06 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:06.620657 | orchestrator | 2026-04-10 00:59:06 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:06.621357 | orchestrator | 2026-04-10 00:59:06 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:06.621394 | orchestrator | 2026-04-10 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:09.685866 | orchestrator | 2026-04-10 00:59:09 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state STARTED 2026-04-10 00:59:09.685925 | orchestrator | 2026-04-10 00:59:09 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:09.687024 | orchestrator | 2026-04-10 00:59:09 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:09.688400 | orchestrator | 2026-04-10 00:59:09 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:09.688445 | orchestrator | 2026-04-10 00:59:09 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:09.688454 | orchestrator | 2026-04-10 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:12.738507 | orchestrator | 2026-04-10 00:59:12 | INFO  | Task f51b8d0e-a693-4bc4-8bd7-ceadd2ef640e is in state SUCCESS 2026-04-10 00:59:12.739816 | orchestrator | 2026-04-10 00:59:12.739866 | orchestrator | 2026-04-10 
00:59:12.739873 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-10 00:59:12.739879 | orchestrator | 2026-04-10 00:59:12.739883 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-10 00:59:12.739889 | orchestrator | Friday 10 April 2026 00:57:32 +0000 (0:00:00.318) 0:00:00.318 ********** 2026-04-10 00:59:12.739894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-10 00:59:12.739900 | orchestrator | 2026-04-10 00:59:12.739905 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-10 00:59:12.739909 | orchestrator | Friday 10 April 2026 00:57:32 +0000 (0:00:00.238) 0:00:00.557 ********** 2026-04-10 00:59:12.739914 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-10 00:59:12.739919 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-10 00:59:12.739924 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-10 00:59:12.739929 | orchestrator | 2026-04-10 00:59:12.739933 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-10 00:59:12.739937 | orchestrator | Friday 10 April 2026 00:57:34 +0000 (0:00:01.622) 0:00:02.179 ********** 2026-04-10 00:59:12.739941 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-10 00:59:12.740002 | orchestrator | 2026-04-10 00:59:12.740013 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-10 00:59:12.740017 | orchestrator | Friday 10 April 2026 00:57:35 +0000 (0:00:01.272) 0:00:03.452 ********** 2026-04-10 00:59:12.740021 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:12.740025 | 
orchestrator | 2026-04-10 00:59:12.740029 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-10 00:59:12.740033 | orchestrator | Friday 10 April 2026 00:57:36 +0000 (0:00:00.995) 0:00:04.448 ********** 2026-04-10 00:59:12.740037 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:12.740040 | orchestrator | 2026-04-10 00:59:12.740044 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-10 00:59:12.740048 | orchestrator | Friday 10 April 2026 00:57:37 +0000 (0:00:00.952) 0:00:05.400 ********** 2026-04-10 00:59:12.740052 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-10 00:59:12.740056 | orchestrator | ok: [testbed-manager] 2026-04-10 00:59:12.740136 | orchestrator | 2026-04-10 00:59:12.740143 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-10 00:59:12.740148 | orchestrator | Friday 10 April 2026 00:58:18 +0000 (0:00:41.535) 0:00:46.936 ********** 2026-04-10 00:59:12.740154 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-10 00:59:12.740160 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-10 00:59:12.740166 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-10 00:59:12.740172 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-10 00:59:12.740178 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-10 00:59:12.740183 | orchestrator | 2026-04-10 00:59:12.740191 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-10 00:59:12.740195 | orchestrator | Friday 10 April 2026 00:58:23 +0000 (0:00:04.306) 0:00:51.242 ********** 2026-04-10 00:59:12.740199 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-10 00:59:12.740203 | orchestrator | 2026-04-10 00:59:12.740207 | orchestrator 
| TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-10 00:59:12.740211 | orchestrator | Friday 10 April 2026 00:58:23 +0000 (0:00:00.636) 0:00:51.879 ********** 2026-04-10 00:59:12.740215 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:59:12.740219 | orchestrator | 2026-04-10 00:59:12.740223 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-10 00:59:12.740226 | orchestrator | Friday 10 April 2026 00:58:23 +0000 (0:00:00.130) 0:00:52.009 ********** 2026-04-10 00:59:12.740230 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:59:12.740234 | orchestrator | 2026-04-10 00:59:12.740238 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-10 00:59:12.740241 | orchestrator | Friday 10 April 2026 00:58:24 +0000 (0:00:00.290) 0:00:52.300 ********** 2026-04-10 00:59:12.740245 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:12.740249 | orchestrator | 2026-04-10 00:59:12.740252 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-10 00:59:12.740268 | orchestrator | Friday 10 April 2026 00:58:25 +0000 (0:00:01.377) 0:00:53.677 ********** 2026-04-10 00:59:12.740272 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:12.740275 | orchestrator | 2026-04-10 00:59:12.740279 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-10 00:59:12.740284 | orchestrator | Friday 10 April 2026 00:58:26 +0000 (0:00:00.712) 0:00:54.390 ********** 2026-04-10 00:59:12.740290 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:12.740296 | orchestrator | 2026-04-10 00:59:12.740302 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-10 00:59:12.740374 | orchestrator | Friday 10 April 2026 00:58:26 +0000 (0:00:00.565) 0:00:54.955 ********** 
2026-04-10 00:59:12.740383 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-10 00:59:12.740664 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-10 00:59:12.740684 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-10 00:59:12.740688 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-10 00:59:12.740692 | orchestrator | 2026-04-10 00:59:12.740696 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:59:12.740701 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 00:59:12.740706 | orchestrator | 2026-04-10 00:59:12.740710 | orchestrator | 2026-04-10 00:59:12.740737 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:59:12.740742 | orchestrator | Friday 10 April 2026 00:58:28 +0000 (0:00:01.490) 0:00:56.446 ********** 2026-04-10 00:59:12.740746 | orchestrator | =============================================================================== 2026-04-10 00:59:12.740750 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.54s 2026-04-10 00:59:12.740754 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.31s 2026-04-10 00:59:12.740758 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.62s 2026-04-10 00:59:12.740762 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s 2026-04-10 00:59:12.740766 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.38s 2026-04-10 00:59:12.740769 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.27s 2026-04-10 00:59:12.740773 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.00s 2026-04-10 00:59:12.740777 | orchestrator 
| osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2026-04-10 00:59:12.740781 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s 2026-04-10 00:59:12.740784 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.64s 2026-04-10 00:59:12.740788 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.57s 2026-04-10 00:59:12.740792 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2026-04-10 00:59:12.740795 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-04-10 00:59:12.740799 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-04-10 00:59:12.740803 | orchestrator | 2026-04-10 00:59:12.740809 | orchestrator | 2026-04-10 00:59:12.740816 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:59:12.740822 | orchestrator | 2026-04-10 00:59:12.740828 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:59:12.740834 | orchestrator | Friday 10 April 2026 00:58:31 +0000 (0:00:00.180) 0:00:00.180 ********** 2026-04-10 00:59:12.740840 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.740846 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:12.740852 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:59:12.740858 | orchestrator | 2026-04-10 00:59:12.740865 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:59:12.740871 | orchestrator | Friday 10 April 2026 00:58:32 +0000 (0:00:00.357) 0:00:00.538 ********** 2026-04-10 00:59:12.740878 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-10 00:59:12.740884 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 
2026-04-10 00:59:12.740891 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-10 00:59:12.740897 | orchestrator | 2026-04-10 00:59:12.740903 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-04-10 00:59:12.740907 | orchestrator | 2026-04-10 00:59:12.740911 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-04-10 00:59:12.740914 | orchestrator | Friday 10 April 2026 00:58:32 +0000 (0:00:00.552) 0:00:01.090 ********** 2026-04-10 00:59:12.740918 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.740922 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:12.740926 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:59:12.740935 | orchestrator | 2026-04-10 00:59:12.740939 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:59:12.740944 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:12.740987 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:12.740992 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:12.740996 | orchestrator | 2026-04-10 00:59:12.741000 | orchestrator | 2026-04-10 00:59:12.741027 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:59:12.741035 | orchestrator | Friday 10 April 2026 00:58:33 +0000 (0:00:01.064) 0:00:02.154 ********** 2026-04-10 00:59:12.741041 | orchestrator | =============================================================================== 2026-04-10 00:59:12.741047 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.06s 2026-04-10 00:59:12.741062 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.55s 2026-04-10 00:59:12.741068 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-04-10 00:59:12.741074 | orchestrator | 2026-04-10 00:59:12.741080 | orchestrator | 2026-04-10 00:59:12.741085 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:59:12.741091 | orchestrator | 2026-04-10 00:59:12.741097 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:59:12.741103 | orchestrator | Friday 10 April 2026 00:56:07 +0000 (0:00:00.298) 0:00:00.298 ********** 2026-04-10 00:59:12.741109 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.741115 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:12.741121 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:59:12.741129 | orchestrator | 2026-04-10 00:59:12.741133 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:59:12.741136 | orchestrator | Friday 10 April 2026 00:56:07 +0000 (0:00:00.279) 0:00:00.577 ********** 2026-04-10 00:59:12.741140 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-10 00:59:12.741144 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-10 00:59:12.741148 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-10 00:59:12.741151 | orchestrator | 2026-04-10 00:59:12.741155 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-10 00:59:12.741159 | orchestrator | 2026-04-10 00:59:12.741190 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-10 00:59:12.741197 | orchestrator | Friday 10 April 2026 00:56:08 +0000 (0:00:00.304) 0:00:00.882 ********** 2026-04-10 00:59:12.741203 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:59:12.741212 | orchestrator | 2026-04-10 00:59:12.741220 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-10 00:59:12.741225 | orchestrator | Friday 10 April 2026 00:56:08 +0000 (0:00:00.646) 0:00:01.528 ********** 2026-04-10 00:59:12.741237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741359 | orchestrator | 2026-04-10 00:59:12.741366 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-10 00:59:12.741377 | orchestrator | Friday 10 April 2026 00:56:10 +0000 (0:00:02.053) 0:00:03.582 ********** 2026-04-10 00:59:12.741385 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.741392 | orchestrator | 2026-04-10 00:59:12.741397 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-10 00:59:12.741405 | orchestrator | Friday 10 April 2026 00:56:11 +0000 (0:00:00.115) 
0:00:03.697 ********** 2026-04-10 00:59:12.741412 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.741418 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.741425 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.741433 | orchestrator | 2026-04-10 00:59:12.741440 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-10 00:59:12.741447 | orchestrator | Friday 10 April 2026 00:56:11 +0000 (0:00:00.277) 0:00:03.974 ********** 2026-04-10 00:59:12.741454 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 00:59:12.741460 | orchestrator | 2026-04-10 00:59:12.741466 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-10 00:59:12.741473 | orchestrator | Friday 10 April 2026 00:56:12 +0000 (0:00:00.910) 0:00:04.885 ********** 2026-04-10 00:59:12.741480 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:59:12.741486 | orchestrator | 2026-04-10 00:59:12.741492 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-10 00:59:12.741503 | orchestrator | Friday 10 April 2026 00:56:12 +0000 (0:00:00.639) 0:00:05.525 ********** 2026-04-10 00:59:12.741510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741596 | orchestrator | 2026-04-10 00:59:12.741602 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-10 00:59:12.741609 | orchestrator | Friday 10 April 2026 00:56:16 +0000 (0:00:03.519) 0:00:09.045 ********** 2026-04-10 00:59:12.741619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.741631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.741654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.741660 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.741667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.741674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.741688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.741694 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.741706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.741718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.741725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.741732 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.741739 | orchestrator | 2026-04-10 
00:59:12.741744 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-10 00:59:12.741750 | orchestrator | Friday 10 April 2026 00:56:16 +0000 (0:00:00.568) 0:00:09.614 ********** 2026-04-10 00:59:12.741757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.741766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.741773 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.741784 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.741795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.741802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.741808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.741815 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.741821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.741832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.741846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.741853 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.741859 | orchestrator | 2026-04-10 00:59:12.741865 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-10 00:59:12.741871 | orchestrator | Friday 10 April 2026 00:56:17 +0000 (0:00:00.745) 0:00:10.359 ********** 2026-04-10 00:59:12.741878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741895 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.741910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.741972 | orchestrator | 2026-04-10 00:59:12.741979 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-10 00:59:12.741988 | orchestrator | Friday 10 April 2026 00:56:20 +0000 (0:00:02.828) 0:00:13.188 ********** 2026-04-10 00:59:12.742000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.742007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.742059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.742069 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.742079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.742092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.742104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.742111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.742118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.742124 | orchestrator | 2026-04-10 00:59:12.742130 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-10 00:59:12.742137 | orchestrator | Friday 10 April 2026 00:56:25 +0000 (0:00:04.739) 0:00:17.928 ********** 2026-04-10 00:59:12.742143 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.742149 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:59:12.742155 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:59:12.742162 | orchestrator | 2026-04-10 00:59:12.742168 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-10 00:59:12.742174 | orchestrator | Friday 10 April 2026 00:56:26 +0000 (0:00:01.352) 0:00:19.280 ********** 2026-04-10 00:59:12.742180 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.742187 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.742193 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.742204 | orchestrator | 2026-04-10 00:59:12.742210 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-10 00:59:12.742216 | orchestrator | Friday 10 April 2026 00:56:27 +0000 (0:00:01.112) 0:00:20.392 ********** 2026-04-10 00:59:12.742222 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.742229 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.742235 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.742241 | orchestrator | 2026-04-10 00:59:12.742322 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-10 00:59:12.742330 | orchestrator | Friday 10 April 2026 00:56:28 +0000 (0:00:00.311) 0:00:20.704 ********** 
2026-04-10 00:59:12.742336 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.742343 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.742349 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.742355 | orchestrator | 2026-04-10 00:59:12.742361 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-10 00:59:12.742367 | orchestrator | Friday 10 April 2026 00:56:28 +0000 (0:00:00.277) 0:00:20.982 ********** 2026-04-10 00:59:12.742377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.742389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.742396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.742402 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.742408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-10 00:59:12.742421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.742433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.742439 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.742450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-10 00:59:12.742456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-10 00:59:12.742463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-10 00:59:12.742474 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.742480 | orchestrator | 2026-04-10 00:59:12.742486 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-10 00:59:12.742492 | orchestrator | Friday 10 
April 2026 00:56:29 +0000 (0:00:00.684) 0:00:21.666 ********** 2026-04-10 00:59:12.742498 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.742504 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.742511 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.742517 | orchestrator | 2026-04-10 00:59:12.742522 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-10 00:59:12.742528 | orchestrator | Friday 10 April 2026 00:56:29 +0000 (0:00:00.449) 0:00:22.116 ********** 2026-04-10 00:59:12.742535 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-10 00:59:12.742541 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-10 00:59:12.742547 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-10 00:59:12.742554 | orchestrator | 2026-04-10 00:59:12.742560 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-10 00:59:12.742566 | orchestrator | Friday 10 April 2026 00:56:31 +0000 (0:00:01.847) 0:00:23.963 ********** 2026-04-10 00:59:12.742572 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 00:59:12.742578 | orchestrator | 2026-04-10 00:59:12.742584 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-10 00:59:12.742590 | orchestrator | Friday 10 April 2026 00:56:32 +0000 (0:00:00.965) 0:00:24.929 ********** 2026-04-10 00:59:12.742596 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.742602 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.742608 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.742614 | orchestrator | 2026-04-10 00:59:12.742620 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] 
***************** 2026-04-10 00:59:12.742626 | orchestrator | Friday 10 April 2026 00:56:32 +0000 (0:00:00.607) 0:00:25.536 ********** 2026-04-10 00:59:12.742632 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-10 00:59:12.742638 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-10 00:59:12.742644 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 00:59:12.742650 | orchestrator | 2026-04-10 00:59:12.742657 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-10 00:59:12.742663 | orchestrator | Friday 10 April 2026 00:56:33 +0000 (0:00:01.104) 0:00:26.641 ********** 2026-04-10 00:59:12.742669 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.742676 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:12.742682 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:59:12.742688 | orchestrator | 2026-04-10 00:59:12.742694 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-10 00:59:12.742700 | orchestrator | Friday 10 April 2026 00:56:34 +0000 (0:00:00.463) 0:00:27.105 ********** 2026-04-10 00:59:12.742706 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-10 00:59:12.742713 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-10 00:59:12.742719 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-10 00:59:12.742725 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-10 00:59:12.742731 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-10 00:59:12.742741 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-10 00:59:12.742747 | orchestrator | changed: [testbed-node-0] => 
(item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-10 00:59:12.742753 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-10 00:59:12.742794 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-10 00:59:12.742801 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-10 00:59:12.742806 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-10 00:59:12.742812 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-10 00:59:12.742818 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-10 00:59:12.742824 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-10 00:59:12.742830 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-10 00:59:12.742837 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-10 00:59:12.742843 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-10 00:59:12.742850 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-10 00:59:12.742856 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-10 00:59:12.742861 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-10 00:59:12.742866 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-10 00:59:12.742872 | orchestrator | 2026-04-10 
00:59:12.742878 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-10 00:59:12.742884 | orchestrator | Friday 10 April 2026 00:56:43 +0000 (0:00:09.245) 0:00:36.350 ********** 2026-04-10 00:59:12.742890 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-10 00:59:12.742896 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-10 00:59:12.742902 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-10 00:59:12.742909 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-10 00:59:12.742916 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-10 00:59:12.742922 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-10 00:59:12.742927 | orchestrator | 2026-04-10 00:59:12.742933 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-10 00:59:12.742939 | orchestrator | Friday 10 April 2026 00:56:46 +0000 (0:00:02.479) 0:00:38.830 ********** 2026-04-10 00:59:12.743033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.743054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.743069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-10 00:59:12.743076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.743084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.743172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-10 00:59:12.743180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.743195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.743200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-10 00:59:12.743204 | orchestrator | 2026-04-10 00:59:12.743208 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-10 00:59:12.743212 | orchestrator | Friday 10 April 2026 00:56:48 +0000 (0:00:01.981) 0:00:40.811 ********** 2026-04-10 00:59:12.743216 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.743220 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.743224 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.743228 | orchestrator | 2026-04-10 00:59:12.743232 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-10 00:59:12.743235 | orchestrator | Friday 10 April 2026 00:56:48 +0000 (0:00:00.349) 0:00:41.161 ********** 2026-04-10 00:59:12.743239 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743243 | orchestrator | 2026-04-10 00:59:12.743247 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-10 00:59:12.743250 | orchestrator | Friday 10 April 2026 00:56:50 +0000 (0:00:02.474) 0:00:43.635 ********** 2026-04-10 00:59:12.743254 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743258 | orchestrator | 2026-04-10 00:59:12.743262 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-10 00:59:12.743266 | orchestrator | Friday 10 April 2026 00:56:53 +0000 (0:00:02.507) 0:00:46.142 ********** 2026-04-10 00:59:12.743269 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.743273 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:12.743277 | orchestrator | ok: 
[testbed-node-2] 2026-04-10 00:59:12.743281 | orchestrator | 2026-04-10 00:59:12.743285 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-10 00:59:12.743289 | orchestrator | Friday 10 April 2026 00:56:54 +0000 (0:00:00.824) 0:00:46.967 ********** 2026-04-10 00:59:12.743292 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.743296 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:12.743300 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:59:12.743304 | orchestrator | 2026-04-10 00:59:12.743307 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-10 00:59:12.743311 | orchestrator | Friday 10 April 2026 00:56:54 +0000 (0:00:00.328) 0:00:47.295 ********** 2026-04-10 00:59:12.743315 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.743319 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.743326 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.743330 | orchestrator | 2026-04-10 00:59:12.743334 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-10 00:59:12.743337 | orchestrator | Friday 10 April 2026 00:56:54 +0000 (0:00:00.327) 0:00:47.623 ********** 2026-04-10 00:59:12.743341 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743345 | orchestrator | 2026-04-10 00:59:12.743349 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-10 00:59:12.743352 | orchestrator | Friday 10 April 2026 00:57:10 +0000 (0:00:15.801) 0:01:03.424 ********** 2026-04-10 00:59:12.743356 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743360 | orchestrator | 2026-04-10 00:59:12.743364 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-10 00:59:12.743368 | orchestrator | Friday 10 April 2026 00:57:23 +0000 (0:00:12.607) 0:01:16.032 ********** 
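The healthcheck blocks in the "Check keystone containers" output above (interval 30, retries 3, start_period 5, timeout 30) follow Docker's healthcheck model: the test command is re-run every interval, and the container is only marked unhealthy after the configured number of consecutive failures. A minimal sketch of that retry logic, assuming a stand-in probe (the `probe` function and `READY_FILE` path are illustrative, not from kolla):

```shell
# Sketch of Docker-style healthcheck retry semantics: re-run a probe,
# sleeping INTERVAL seconds between attempts, and report unhealthy only
# after RETRIES consecutive failures. "probe" stands in for commands
# like "healthcheck_curl http://192.168.16.10:5000".
probe() { [ -f "${READY_FILE:-/tmp/hc_ready}" ]; }

healthcheck() {
  retries=$1
  interval=$2
  i=0
  while [ "$i" -lt "$retries" ]; do
    if probe; then
      echo healthy
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo unhealthy
  return 1
}
```

In the real containers the probe also has its own per-attempt timeout (the `timeout: '30'` field), which this sketch omits for brevity.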
2026-04-10 00:59:12.743371 | orchestrator | 2026-04-10 00:59:12.743375 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-10 00:59:12.743382 | orchestrator | Friday 10 April 2026 00:57:23 +0000 (0:00:00.064) 0:01:16.096 ********** 2026-04-10 00:59:12.743386 | orchestrator | 2026-04-10 00:59:12.743390 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-10 00:59:12.743393 | orchestrator | Friday 10 April 2026 00:57:23 +0000 (0:00:00.063) 0:01:16.160 ********** 2026-04-10 00:59:12.743397 | orchestrator | 2026-04-10 00:59:12.743401 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-10 00:59:12.743405 | orchestrator | Friday 10 April 2026 00:57:23 +0000 (0:00:00.063) 0:01:16.224 ********** 2026-04-10 00:59:12.743409 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743412 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:59:12.743416 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:59:12.743421 | orchestrator | 2026-04-10 00:59:12.743426 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-10 00:59:12.743432 | orchestrator | Friday 10 April 2026 00:58:02 +0000 (0:00:39.336) 0:01:55.561 ********** 2026-04-10 00:59:12.743442 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:59:12.743449 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:59:12.743455 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743461 | orchestrator | 2026-04-10 00:59:12.743467 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-10 00:59:12.743473 | orchestrator | Friday 10 April 2026 00:58:10 +0000 (0:00:07.719) 0:02:03.280 ********** 2026-04-10 00:59:12.743483 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743488 | orchestrator | changed: [testbed-node-1] 
2026-04-10 00:59:12.743494 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:59:12.743499 | orchestrator | 2026-04-10 00:59:12.743505 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-10 00:59:12.743510 | orchestrator | Friday 10 April 2026 00:58:21 +0000 (0:00:11.094) 0:02:14.375 ********** 2026-04-10 00:59:12.743516 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 00:59:12.743522 | orchestrator | 2026-04-10 00:59:12.743528 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-10 00:59:12.743533 | orchestrator | Friday 10 April 2026 00:58:22 +0000 (0:00:00.546) 0:02:14.921 ********** 2026-04-10 00:59:12.743538 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:12.743544 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.743549 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:59:12.743555 | orchestrator | 2026-04-10 00:59:12.743560 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-10 00:59:12.743566 | orchestrator | Friday 10 April 2026 00:58:23 +0000 (0:00:00.745) 0:02:15.667 ********** 2026-04-10 00:59:12.743571 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:12.743577 | orchestrator | 2026-04-10 00:59:12.743582 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-10 00:59:12.743597 | orchestrator | Friday 10 April 2026 00:58:24 +0000 (0:00:01.696) 0:02:17.363 ********** 2026-04-10 00:59:12.743602 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-10 00:59:12.743609 | orchestrator | 2026-04-10 00:59:12.743614 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-10 00:59:12.743620 | orchestrator | Friday 10 April 2026 00:58:38 +0000 (0:00:14.201) 
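The "Run key distribution" step above pushes the fernet key directory from the bootstrap node to the other keystone hosts through the keystone-ssh containers (whose sshd listens on port 8023, as the healthchecks earlier in this log show). A rough manual equivalent, as a sketch only: kolla-ansible's actual distribution script differs in detail, and the user, paths, and peer IPs here are assumptions based on this deployment's log.

```shell
# Hypothetical manual equivalent of fernet key distribution: rsync the
# key directory to each peer over the keystone-ssh container's sshd.
# Peer IPs, user, and paths are assumptions from this log, not kolla's
# actual script.
for peer in 192.168.16.11 192.168.16.12; do
  rsync -a --delete -e "ssh -p 8023" \
    /etc/keystone/fernet-keys/ "keystone@${peer}:/etc/keystone/fernet-keys/"
done
```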
0:02:31.565 ********** 2026-04-10 00:59:12.743625 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-10 00:59:12.743631 | orchestrator | 2026-04-10 00:59:12.743636 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-10 00:59:12.743641 | orchestrator | Friday 10 April 2026 00:58:57 +0000 (0:00:18.117) 0:02:49.682 ********** 2026-04-10 00:59:12.743647 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-10 00:59:12.743652 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-10 00:59:12.743657 | orchestrator | 2026-04-10 00:59:12.743663 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-10 00:59:12.743669 | orchestrator | Friday 10 April 2026 00:59:04 +0000 (0:00:07.839) 0:02:57.523 ********** 2026-04-10 00:59:12.743675 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.743681 | orchestrator | 2026-04-10 00:59:12.743686 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-10 00:59:12.743691 | orchestrator | Friday 10 April 2026 00:59:05 +0000 (0:00:00.333) 0:02:57.856 ********** 2026-04-10 00:59:12.743697 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.743703 | orchestrator | 2026-04-10 00:59:12.743708 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-10 00:59:12.743714 | orchestrator | Friday 10 April 2026 00:59:05 +0000 (0:00:00.176) 0:02:58.032 ********** 2026-04-10 00:59:12.743720 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.743726 | orchestrator | 2026-04-10 00:59:12.743731 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-10 00:59:12.743737 | orchestrator | Friday 10 April 2026 00:59:05 +0000 
(0:00:00.136) 0:02:58.169 ********** 2026-04-10 00:59:12.743743 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.743749 | orchestrator | 2026-04-10 00:59:12.743754 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-10 00:59:12.743761 | orchestrator | Friday 10 April 2026 00:59:06 +0000 (0:00:00.605) 0:02:58.774 ********** 2026-04-10 00:59:12.743766 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:12.743772 | orchestrator | 2026-04-10 00:59:12.743777 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-10 00:59:12.743784 | orchestrator | Friday 10 April 2026 00:59:10 +0000 (0:00:04.209) 0:03:02.983 ********** 2026-04-10 00:59:12.743790 | orchestrator | skipping: [testbed-node-0] 2026-04-10 00:59:12.743796 | orchestrator | skipping: [testbed-node-1] 2026-04-10 00:59:12.743801 | orchestrator | skipping: [testbed-node-2] 2026-04-10 00:59:12.743807 | orchestrator | 2026-04-10 00:59:12.743813 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:59:12.743825 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-10 00:59:12.743834 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-10 00:59:12.743840 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-10 00:59:12.743846 | orchestrator | 2026-04-10 00:59:12.743851 | orchestrator | 2026-04-10 00:59:12.743856 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:59:12.743862 | orchestrator | Friday 10 April 2026 00:59:10 +0000 (0:00:00.490) 0:03:03.474 ********** 2026-04-10 00:59:12.743874 | orchestrator | =============================================================================== 2026-04-10 
00:59:12.743880 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 39.34s 2026-04-10 00:59:12.743886 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.12s 2026-04-10 00:59:12.743891 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.80s 2026-04-10 00:59:12.743897 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 14.20s 2026-04-10 00:59:12.743909 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.61s 2026-04-10 00:59:12.743915 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.09s 2026-04-10 00:59:12.743921 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.25s 2026-04-10 00:59:12.743927 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.84s 2026-04-10 00:59:12.743933 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.72s 2026-04-10 00:59:12.743939 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.74s 2026-04-10 00:59:12.743945 | orchestrator | keystone : Creating default user role ----------------------------------- 4.21s 2026-04-10 00:59:12.743976 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.52s 2026-04-10 00:59:12.743982 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.83s 2026-04-10 00:59:12.743988 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.51s 2026-04-10 00:59:12.743993 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.48s 2026-04-10 00:59:12.743999 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.47s 2026-04-10 00:59:12.744004 
| orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.05s 2026-04-10 00:59:12.744010 | orchestrator | keystone : Check keystone containers ------------------------------------ 1.98s 2026-04-10 00:59:12.744016 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.85s 2026-04-10 00:59:12.744021 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.70s 2026-04-10 00:59:12.744026 | orchestrator | 2026-04-10 00:59:12 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:12.744032 | orchestrator | 2026-04-10 00:59:12 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 00:59:12.744038 | orchestrator | 2026-04-10 00:59:12 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:12.744043 | orchestrator | 2026-04-10 00:59:12 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:12.744164 | orchestrator | 2026-04-10 00:59:12 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:12.744177 | orchestrator | 2026-04-10 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:15.793061 | orchestrator | 2026-04-10 00:59:15 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:15.796579 | orchestrator | 2026-04-10 00:59:15 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 00:59:15.797985 | orchestrator | 2026-04-10 00:59:15 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:15.799097 | orchestrator | 2026-04-10 00:59:15 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:15.800565 | orchestrator | 2026-04-10 00:59:15 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:15.800602 | orchestrator | 2026-04-10 
00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:18.856213 | orchestrator | 2026-04-10 00:59:18 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:18.858503 | orchestrator | 2026-04-10 00:59:18 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 00:59:18.862149 | orchestrator | 2026-04-10 00:59:18 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:18.865458 | orchestrator | 2026-04-10 00:59:18 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:18.869848 | orchestrator | 2026-04-10 00:59:18 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:18.870346 | orchestrator | 2026-04-10 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:21.916620 | orchestrator | 2026-04-10 00:59:21 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:21.917102 | orchestrator | 2026-04-10 00:59:21 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 00:59:21.918121 | orchestrator | 2026-04-10 00:59:21 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:21.919807 | orchestrator | 2026-04-10 00:59:21 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state STARTED 2026-04-10 00:59:21.921221 | orchestrator | 2026-04-10 00:59:21 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:21.921628 | orchestrator | 2026-04-10 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:24.950222 | orchestrator | 2026-04-10 00:59:24 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state STARTED 2026-04-10 00:59:24.950312 | orchestrator | 2026-04-10 00:59:24 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 00:59:24.951106 | orchestrator | 2026-04-10 00:59:24 | INFO  | Task 
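The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records come from a simple fixed-interval polling loop. A sketch of the pattern, with `task_state` as a hypothetical stand-in for the real state lookup:

```shell
# Poll a set of task ids until none is still STARTED, printing each
# state and sleeping one second between rounds. "task_state" is a
# hypothetical stand-in for the real API/CLI state lookup; here it
# reads the state from a file per task id.
task_state() { cat "${STATE_DIR:-/tmp}/task_$1" 2>/dev/null || echo STARTED; }

wait_for_tasks() {
  while :; do
    busy=0
    for id in "$@"; do
      state=$(task_state "$id")
      echo "Task $id is in state $state"
      if [ "$state" = "STARTED" ]; then busy=1; fi
    done
    if [ "$busy" -eq 0 ]; then break; fi
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done
}
```

A fixed one-second interval is fine for a CI harness; a production watcher would typically add backoff and an overall deadline.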
b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:24.951543 | orchestrator | 2026-04-10 00:59:24 | INFO  | Task 8d23b466-6b45-41e6-b35b-a99dc1eda02a is in state SUCCESS 2026-04-10 00:59:24.952688 | orchestrator | 2026-04-10 00:59:24 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:24.952717 | orchestrator | 2026-04-10 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-04-10 00:59:27.973490 | orchestrator | 2026-04-10 00:59:27 | INFO  | Task ee69e66b-6407-4ee9-9b74-e39688de799d is in state SUCCESS 2026-04-10 00:59:27.973857 | orchestrator | 2026-04-10 00:59:27.973881 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-10 00:59:27.973889 | orchestrator | 2.16.14 2026-04-10 00:59:27.973897 | orchestrator | 2026-04-10 00:59:27.973904 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-04-10 00:59:27.973911 | orchestrator | 2026-04-10 00:59:27.973919 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-10 00:59:27.973926 | orchestrator | Friday 10 April 2026 00:58:33 +0000 (0:00:00.246) 0:00:00.246 ********** 2026-04-10 00:59:27.973963 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.973971 | orchestrator | 2026-04-10 00:59:27.973977 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-10 00:59:27.973983 | orchestrator | Friday 10 April 2026 00:58:34 +0000 (0:00:01.655) 0:00:01.901 ********** 2026-04-10 00:59:27.973990 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.973996 | orchestrator | 2026-04-10 00:59:27.974002 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-10 00:59:27.974008 | orchestrator | Friday 10 April 2026 00:58:35 +0000 (0:00:00.956) 0:00:02.857 ********** 2026-04-10 00:59:27.974053 | 
orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.974060 | orchestrator | 2026-04-10 00:59:27.974067 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-10 00:59:27.974098 | orchestrator | Friday 10 April 2026 00:58:36 +0000 (0:00:01.172) 0:00:04.030 ********** 2026-04-10 00:59:27.974105 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.974111 | orchestrator | 2026-04-10 00:59:27.974117 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-10 00:59:27.974124 | orchestrator | Friday 10 April 2026 00:58:37 +0000 (0:00:01.145) 0:00:05.175 ********** 2026-04-10 00:59:27.974130 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.974136 | orchestrator | 2026-04-10 00:59:27.974143 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-10 00:59:27.974210 | orchestrator | Friday 10 April 2026 00:58:39 +0000 (0:00:01.084) 0:00:06.259 ********** 2026-04-10 00:59:27.974218 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.974224 | orchestrator | 2026-04-10 00:59:27.974230 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-10 00:59:27.974237 | orchestrator | Friday 10 April 2026 00:58:40 +0000 (0:00:01.131) 0:00:07.391 ********** 2026-04-10 00:59:27.974243 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.974248 | orchestrator | 2026-04-10 00:59:27.974255 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-10 00:59:27.974261 | orchestrator | Friday 10 April 2026 00:58:42 +0000 (0:00:02.115) 0:00:09.507 ********** 2026-04-10 00:59:27.974267 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.974273 | orchestrator | 2026-04-10 00:59:27.974280 | orchestrator | TASK [Create admin user] ******************************************************* 
2026-04-10 00:59:27.974287 | orchestrator | Friday 10 April 2026 00:58:43 +0000 (0:00:01.056) 0:00:10.564 ********** 2026-04-10 00:59:27.974293 | orchestrator | changed: [testbed-manager] 2026-04-10 00:59:27.974299 | orchestrator | 2026-04-10 00:59:27.974305 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-10 00:59:27.974312 | orchestrator | Friday 10 April 2026 00:58:56 +0000 (0:00:12.909) 0:00:23.473 ********** 2026-04-10 00:59:27.974318 | orchestrator | skipping: [testbed-manager] 2026-04-10 00:59:27.974325 | orchestrator | 2026-04-10 00:59:27.974331 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-10 00:59:27.974337 | orchestrator | 2026-04-10 00:59:27.974343 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-10 00:59:27.974363 | orchestrator | Friday 10 April 2026 00:58:56 +0000 (0:00:00.154) 0:00:23.628 ********** 2026-04-10 00:59:27.974370 | orchestrator | changed: [testbed-node-0] 2026-04-10 00:59:27.974376 | orchestrator | 2026-04-10 00:59:27.974382 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-10 00:59:27.974388 | orchestrator | 2026-04-10 00:59:27.974394 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-10 00:59:27.974400 | orchestrator | Friday 10 April 2026 00:59:08 +0000 (0:00:11.905) 0:00:35.534 ********** 2026-04-10 00:59:27.974407 | orchestrator | changed: [testbed-node-1] 2026-04-10 00:59:27.974413 | orchestrator | 2026-04-10 00:59:27.974419 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-10 00:59:27.974425 | orchestrator | 2026-04-10 00:59:27.974431 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-10 00:59:27.974438 | orchestrator | Friday 10 April 2026 
00:59:19 +0000 (0:00:11.560) 0:00:47.094 ********** 2026-04-10 00:59:27.974444 | orchestrator | changed: [testbed-node-2] 2026-04-10 00:59:27.974450 | orchestrator | 2026-04-10 00:59:27.974457 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:59:27.974464 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-10 00:59:27.974472 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.974479 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.974491 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.974497 | orchestrator | 2026-04-10 00:59:27.974503 | orchestrator | 2026-04-10 00:59:27.974510 | orchestrator | 2026-04-10 00:59:27.974516 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:59:27.974522 | orchestrator | Friday 10 April 2026 00:59:21 +0000 (0:00:01.675) 0:00:48.770 ********** 2026-04-10 00:59:27.974528 | orchestrator | =============================================================================== 2026-04-10 00:59:27.974535 | orchestrator | Restart ceph manager service ------------------------------------------- 25.14s 2026-04-10 00:59:27.974552 | orchestrator | Create admin user ------------------------------------------------------ 12.91s 2026-04-10 00:59:27.974559 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.11s 2026-04-10 00:59:27.974565 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.66s 2026-04-10 00:59:27.974571 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.17s 2026-04-10 00:59:27.974578 | orchestrator | Set 
mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.15s 2026-04-10 00:59:27.974583 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.13s 2026-04-10 00:59:27.974589 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.08s 2026-04-10 00:59:27.974596 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.06s 2026-04-10 00:59:27.974602 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.96s 2026-04-10 00:59:27.974608 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2026-04-10 00:59:27.974614 | orchestrator | 2026-04-10 00:59:27.974620 | orchestrator | 2026-04-10 00:59:27.974625 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 00:59:27.974632 | orchestrator | 2026-04-10 00:59:27.974638 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 00:59:27.974644 | orchestrator | Friday 10 April 2026 00:58:38 +0000 (0:00:00.355) 0:00:00.355 ********** 2026-04-10 00:59:27.974650 | orchestrator | ok: [testbed-node-0] 2026-04-10 00:59:27.974657 | orchestrator | ok: [testbed-node-1] 2026-04-10 00:59:27.974663 | orchestrator | ok: [testbed-node-2] 2026-04-10 00:59:27.974669 | orchestrator | ok: [testbed-node-3] 2026-04-10 00:59:27.974674 | orchestrator | ok: [testbed-node-4] 2026-04-10 00:59:27.974680 | orchestrator | ok: [testbed-node-5] 2026-04-10 00:59:27.974687 | orchestrator | ok: [testbed-manager] 2026-04-10 00:59:27.974693 | orchestrator | 2026-04-10 00:59:27.974698 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 00:59:27.974705 | orchestrator | Friday 10 April 2026 00:58:39 +0000 (0:00:01.004) 0:00:01.359 ********** 2026-04-10 00:59:27.974711 | orchestrator | ok: 
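The ceph dashboard tasks recapped above are thin wrappers around plain ceph CLI calls. Roughly equivalent manual commands, as a sketch against a live cluster (settings taken from the task names in this log; the password-file path is illustrative):

```shell
# Hypothetical manual equivalent of the dashboard bootstrap play:
# disable the module, adjust mgr/dashboard settings, re-enable it,
# then create the admin user from a password file.
ceph mgr module disable dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404
ceph mgr module enable dashboard
ceph dashboard ac-user-create admin -i /path/to/password-file administrator
```

The disable/enable cycle (and the subsequent restart of each ceph-mgr service) is what forces the mgr to pick up the new settings.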
[testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-10 00:59:27.974718 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-04-10 00:59:27.974724 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-10 00:59:27.974730 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-10 00:59:27.974736 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-10 00:59:27.974743 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-10 00:59:27.974749 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-10 00:59:27.974755 | orchestrator | 2026-04-10 00:59:27.974761 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-10 00:59:27.974767 | orchestrator | 2026-04-10 00:59:27.974773 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-10 00:59:27.974779 | orchestrator | Friday 10 April 2026 00:58:40 +0000 (0:00:01.200) 0:00:02.560 ********** 2026-04-10 00:59:27.974786 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-10 00:59:27.974798 | orchestrator | 2026-04-10 00:59:27.974805 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-10 00:59:27.974815 | orchestrator | Friday 10 April 2026 00:58:42 +0000 (0:00:01.910) 0:00:04.470 ********** 2026-04-10 00:59:27.974821 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-10 00:59:27.974827 | orchestrator | 2026-04-10 00:59:27.974833 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-10 00:59:27.974839 | orchestrator | Friday 10 April 2026 00:58:57 +0000 (0:00:14.577) 0:00:19.048 ********** 2026-04-10 00:59:27.974847 | 
orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-10 00:59:27.974854 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-10 00:59:27.974861 | orchestrator | 2026-04-10 00:59:27.974867 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-10 00:59:27.974873 | orchestrator | Friday 10 April 2026 00:59:04 +0000 (0:00:07.819) 0:00:26.868 ********** 2026-04-10 00:59:27.974880 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 00:59:27.974886 | orchestrator | 2026-04-10 00:59:27.974892 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-10 00:59:27.974899 | orchestrator | Friday 10 April 2026 00:59:09 +0000 (0:00:04.540) 0:00:31.408 ********** 2026-04-10 00:59:27.974905 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-10 00:59:27.974912 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 00:59:27.974918 | orchestrator | 2026-04-10 00:59:27.974924 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-10 00:59:27.974953 | orchestrator | Friday 10 April 2026 00:59:13 +0000 (0:00:04.031) 0:00:35.439 ********** 2026-04-10 00:59:27.974959 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 00:59:27.974965 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-10 00:59:27.974971 | orchestrator | 2026-04-10 00:59:27.974978 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-10 00:59:27.974984 | orchestrator | Friday 10 April 2026 00:59:20 +0000 (0:00:06.774) 0:00:42.214 ********** 2026-04-10 00:59:27.974990 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> 
service -> admin) 2026-04-10 00:59:27.974997 | orchestrator | 2026-04-10 00:59:27.975004 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 00:59:27.975016 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.975023 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.975030 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.975037 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.975043 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.975049 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.975055 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 00:59:27.975062 | orchestrator | 2026-04-10 00:59:27.975068 | orchestrator | 2026-04-10 00:59:27.975074 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 00:59:27.975080 | orchestrator | Friday 10 April 2026 00:59:25 +0000 (0:00:05.528) 0:00:47.743 ********** 2026-04-10 00:59:27.975091 | orchestrator | =============================================================================== 2026-04-10 00:59:27.975098 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 14.58s 2026-04-10 00:59:27.975104 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.82s 2026-04-10 00:59:27.975110 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.78s 2026-04-10 00:59:27.975116 | orchestrator | service-ks-register : ceph-rgw 
| Granting user roles -------------------- 5.53s 2026-04-10 00:59:27.975121 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.54s 2026-04-10 00:59:27.975127 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.03s 2026-04-10 00:59:27.975133 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.91s 2026-04-10 00:59:27.975139 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.20s 2026-04-10 00:59:27.975146 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.00s 2026-04-10 00:59:27.975153 | orchestrator | 2026-04-10 00:59:27 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 00:59:27.975159 | orchestrator | 2026-04-10 00:59:27 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 00:59:27.975367 | orchestrator | 2026-04-10 00:59:27 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 00:59:27.975991 | orchestrator | 2026-04-10 00:59:27 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state STARTED 2026-04-10 00:59:27.976069 | orchestrator | 2026-04-10 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:29.617707 | orchestrator | 2026-04-10 01:01:29 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:29.618130 | orchestrator | 2026-04-10 01:01:29 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:29.619360 | orchestrator | 2026-04-10 01:01:29 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 01:01:29.620031 | orchestrator | 2026-04-10 01:01:29 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:29.622337 | orchestrator | 2026-04-10 01:01:29 | INFO  | Task 2b591f06-319e-4344-ae10-617a97ea99dc is in state SUCCESS 2026-04-10 01:01:29.623921 | orchestrator | 2026-04-10 01:01:29.623959 | orchestrator
| 2026-04-10 01:01:29.623968 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:01:29.623976 | orchestrator | 2026-04-10 01:01:29.623984 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:01:29.623992 | orchestrator | Friday 10 April 2026 00:58:31 +0000 (0:00:00.318) 0:00:00.318 ********** 2026-04-10 01:01:29.623998 | orchestrator | ok: [testbed-manager] 2026-04-10 01:01:29.624006 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:01:29.624013 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:01:29.624019 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:01:29.624025 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:01:29.624030 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:01:29.624036 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:01:29.624053 | orchestrator | 2026-04-10 01:01:29.624057 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:01:29.624061 | orchestrator | Friday 10 April 2026 00:58:32 +0000 (0:00:00.726) 0:00:01.045 ********** 2026-04-10 01:01:29.624065 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-10 01:01:29.624070 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-10 01:01:29.624081 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-10 01:01:29.624084 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-10 01:01:29.624088 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-10 01:01:29.624092 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-10 01:01:29.624096 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-10 01:01:29.624100 | orchestrator | 2026-04-10 01:01:29.624104 | orchestrator | PLAY [Apply role prometheus] 
*************************************************** 2026-04-10 01:01:29.624108 | orchestrator | 2026-04-10 01:01:29.624111 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-10 01:01:29.624137 | orchestrator | Friday 10 April 2026 00:58:33 +0000 (0:00:00.916) 0:00:01.961 ********** 2026-04-10 01:01:29.624152 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 01:01:29.624158 | orchestrator | 2026-04-10 01:01:29.624162 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-10 01:01:29.624166 | orchestrator | Friday 10 April 2026 00:58:34 +0000 (0:00:01.225) 0:00:03.186 ********** 2026-04-10 01:01:29.624172 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-10 01:01:29.624180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624189 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-04-10 01:01:29.624287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624292 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-10 01:01:29.624302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624662 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624683 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624727 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624748 | orchestrator | 2026-04-10 01:01:29.624753 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-10 01:01:29.624757 | orchestrator | Friday 10 April 2026 00:58:39 +0000 (0:00:04.382) 0:00:07.569 ********** 2026-04-10 01:01:29.624761 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 01:01:29.624766 | orchestrator | 2026-04-10 01:01:29.624769 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-10 01:01:29.624775 | orchestrator | Friday 10 April 2026 00:58:41 +0000 (0:00:01.884) 0:00:09.454 ********** 2026-04-10 01:01:29.624780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-10 01:01:29.624822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624852 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624859 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.624864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624893 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.624925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.624936 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-10 01:01:29.624997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.625005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.625010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.625013 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.625131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.625139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.625156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.625162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.625169 | orchestrator | 2026-04-10 01:01:29.625174 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-10 01:01:29.625286 | orchestrator | Friday 10 April 2026 00:58:46 +0000 (0:00:05.600) 0:00:15.054 ********** 2026-04-10 01:01:29.625303 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-10 01:01:29.625311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625326 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-10 01:01:29.625346 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-10 01:01:29.625364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625412 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:01:29.625418 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:29.625425 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:29.625435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625474 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:29.625482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625494 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625502 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:01:29.625679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625709 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:01:29.625714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625741 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:01:29.625746 | orchestrator | 2026-04-10 01:01:29.625750 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-10 01:01:29.625754 | orchestrator | Friday 10 April 2026 00:58:48 +0000 (0:00:01.409) 0:00:16.463 ********** 2026-04-10 01:01:29.625758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-10 01:01:29.625767 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625779 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-10 01:01:29.625803 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.625863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625870 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:01:29.625877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.625883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.625889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.626136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.626142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-10 01:01:29.626152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.626157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.626167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.626171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.626175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-10 01:01:29.626179 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:29.626183 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:29.626187 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:29.626237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.626246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.626253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.626258 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:01:29.626369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.626389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.626395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.626401 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:01:29.626408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-10 01:01:29.626414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-10 01:01:29.626446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.626454 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:01:29.626461 | orchestrator |
2026-04-10 01:01:29.626468 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-10 01:01:29.626474 | orchestrator | Friday 10 April 2026 00:58:50 +0000 (0:00:01.896) 0:00:18.360 **********
2026-04-10 01:01:29.626483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.626495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.626520 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-10 01:01:29.626527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.626533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.626539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.626566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.626576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.626582 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-10 01:01:29.626599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.626605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.626611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626623 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.626658 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.626683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.626690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626738 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-10 01:01:29.626755 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-10 01:01:29.626771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-10 01:01:29.626778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.626872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.626884 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.626891 | orchestrator |
2026-04-10 01:01:29.626897 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-04-10 01:01:29.626904 | orchestrator | Friday 10 April 2026 00:58:55 +0000 (0:00:05.723) 0:00:24.084 **********
2026-04-10 01:01:29.626910 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:01:29.626917 | orchestrator |
2026-04-10 01:01:29.626923 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-10 01:01:29.626968 | orchestrator | Friday 10 April 2026 00:58:56 +0000 (0:00:01.039) 0:00:25.123 **********
2026-04-10 01:01:29.626977 | orchestrator | skipping: [testbed-node-0] =>
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1108608, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.626985 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1108608, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.626997 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1108608, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627005 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1108608, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627011 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1108627, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627019 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1108627, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627042 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1108608, 'dev': 144, 'nlink': 1, 
'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627059 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1108627, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627066 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1108598, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.358611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627077 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1108608, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 01:01:29.627084 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1108627, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627091 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1108608, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627097 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1108598, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.358611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 
01:01:29.627122 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1108598, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.358611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627133 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1108619, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.367151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627140 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1108627, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627151 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1108619, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.367151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627158 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1108627, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627163 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1108595, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3580666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627167 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1108598, 'dev': 144, 'nlink': 
1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.358611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627189 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1108595, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3580666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627193 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1108598, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.358611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627199 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1108619, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.367151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627209 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1108619, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.367151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627216 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1108612, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627223 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1108612, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627229 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1108598, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.358611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627241 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1108627, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 01:01:29.627265 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1108619, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.367151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627273 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1108618, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3668299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627285 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1108595, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3580666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627291 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1108595, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3580666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627295 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1108618, 'dev': 144, 'nlink': 1, 'atime': 
1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3668299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627299 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1108615, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.365081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627309 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1108595, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3580666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627327 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1108612, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627332 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1108619, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.367151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627340 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1108618, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3668299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627344 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1108612, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627349 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1108615, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.365081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627357 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1108601, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.359081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627362 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1108612, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627378 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1108615, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.365081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627383 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1108595, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3580666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627390 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1108618, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3668299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627394 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1108601, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 
1775779350.0, 'ctime': 1775780214.359081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627398 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108626, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627406 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1108601, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.359081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627410 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1108618, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3668299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627430 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1108615, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.365081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627435 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108592, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3569908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627442 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108626, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627447 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1108612, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627451 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1108598, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.358611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 01:01:29.627458 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1108601, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.359081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627462 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108626, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627481 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1108615, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.365081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627487 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1108631, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3721423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627496 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1108618, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 
1775779350.0, 'ctime': 1775780214.3668299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627501 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1108601, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.359081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627505 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108626, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627513 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108592, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3569908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627517 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108592, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3569908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1108615, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.365081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627542 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1108631, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3721423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627550 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108592, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3569908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627554 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108626, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627562 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1108623, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3679402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627566 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1108631, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3721423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627570 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1108619, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.367151, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-10 01:01:29.627590 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1108601, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.359081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627595 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1108623, 'dev': 144, 'nlink': 1, 'atime': 
1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3679402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627602 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1108631, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3721423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627606 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1108623, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3679402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-10 01:01:29.627614 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108592, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3569908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627618 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108597, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3584137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627622 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108597, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3584137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627630 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108626, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627635 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1108595, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3580666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627641 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1108623, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3679402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627646 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108597, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3584137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627655 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1108631, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3721423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627659 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108592, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3569908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627663 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1108593, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3575017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627674 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1108593, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3575017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627679 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1108593, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3575017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627686 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1108623, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3679402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627694 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1108631, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3721423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627698 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1108617, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3664827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627702 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108597, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3584137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627706 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1108617, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3664827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627714 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1108617, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3664827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627719 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1108593, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3575017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627726 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1108616, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3663263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627734 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1108623, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3679402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627738 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1108616, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3663263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627742 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1108617, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3664827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627747 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108597, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3584137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627755 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108597, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3584137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627759 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1108629, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3715203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627763 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:01:29.627770 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1108616, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3663263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627778 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1108612, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3640811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627806 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1108616, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3663263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627812 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1108593, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3575017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627817 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1108629, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3715203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627820 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:01:29.627828 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1108629, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3715203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627832 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:01:29.627836 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1108593, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3575017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627847 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1108629, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3715203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627852 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:01:29.627856 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1108617, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3664827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1108617, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3664827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627864 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1108616, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3663263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627868 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1108616, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3663263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627876 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1108618, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3668299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627880 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1108629, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3715203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627887 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:01:29.627894 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1108629, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3715203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627899 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:01:29.627903 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1108615, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.365081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627907 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1108601, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.359081, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627911 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108626, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3690813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627915 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108592, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3569908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627922 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1108631, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3721423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627929 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1108623, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3679402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627937 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1108597, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3584137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627941 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1108593, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3575017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627945 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1108617, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3664827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627950 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1108616, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3663263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627954 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1108629, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3715203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-10 01:01:29.627958 | orchestrator |
2026-04-10 01:01:29.627962 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-10 01:01:29.627966 | orchestrator | Friday 10 April 2026 00:59:21 +0000 (0:00:24.824) 0:00:49.948 **********
2026-04-10 01:01:29.627970 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:01:29.627974 | orchestrator |
2026-04-10 01:01:29.627981 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-10 01:01:29.627988 | orchestrator | Friday 10 April 2026 00:59:22 +0000 (0:00:01.008) 0:00:50.956 **********
2026-04-10 01:01:29.627992 | orchestrator | [WARNING]: Skipped
2026-04-10 01:01:29.627997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628001 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-10 01:01:29.628005 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628009 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-10 01:01:29.628013 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:01:29.628017 | orchestrator | [WARNING]: Skipped
2026-04-10 01:01:29.628021 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628025 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-10 01:01:29.628029 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628033 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-10 01:01:29.628037 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-10 01:01:29.628041 | orchestrator | [WARNING]: Skipped
2026-04-10 01:01:29.628045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628049 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-10 01:01:29.628053 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628057 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-10 01:01:29.628061 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-10 01:01:29.628065 | orchestrator | [WARNING]: Skipped
2026-04-10 01:01:29.628069 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628073 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-10 01:01:29.628080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628085 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-10 01:01:29.628089 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-10 01:01:29.628092 | orchestrator | [WARNING]: Skipped
2026-04-10 01:01:29.628096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628100 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-10 01:01:29.628105 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628109 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-10 01:01:29.628113 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-10 01:01:29.628117 | orchestrator | [WARNING]: Skipped
2026-04-10 01:01:29.628121 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628125 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-10 01:01:29.628129 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628133 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-10 01:01:29.628137 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-10 01:01:29.628141 | orchestrator | [WARNING]: Skipped
2026-04-10 01:01:29.628145 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628149 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-10 01:01:29.628153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-10 01:01:29.628157 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-10 01:01:29.628161 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-10 01:01:29.628165 | orchestrator |
2026-04-10 01:01:29.628170 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-10 01:01:29.628178 | orchestrator | Friday 10 April 2026 00:59:25 +0000 (0:00:02.888) 0:00:53.845 **********
2026-04-10 01:01:29.628182 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-10 01:01:29.628186 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:01:29.628190 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-10 01:01:29.628194 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:01:29.628198 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-10 01:01:29.628202 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:01:29.628207 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-10 01:01:29.628211 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:01:29.628215 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-10 01:01:29.628219 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:01:29.628223 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-10 01:01:29.628227 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:01:29.628232 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-10 01:01:29.628236 | orchestrator |
2026-04-10 01:01:29.628241 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-10 01:01:29.628245 | orchestrator | Friday 10 April 2026 00:59:39 +0000 (0:00:13.554) 0:01:07.400 **********
2026-04-10 01:01:29.628249 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-10 01:01:29.628257 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-10 01:01:29.628261 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:01:29.628265 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:01:29.628269 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-10 01:01:29.628273 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:01:29.628278 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-10 01:01:29.628281 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:01:29.628285 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-10 01:01:29.628290 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:01:29.628294 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-10 01:01:29.628298 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:01:29.628302 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-10 01:01:29.628306 | orchestrator |
2026-04-10 01:01:29.628310 | orchestrator | TASK [prometheus : Copying over prometheus
alertmanager config file] *********** 2026-04-10 01:01:29.628314 | orchestrator | Friday 10 April 2026 00:59:42 +0000 (0:00:03.340) 0:01:10.740 ********** 2026-04-10 01:01:29.628318 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-10 01:01:29.628323 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:29.628326 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-10 01:01:29.628330 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:29.628337 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-10 01:01:29.628342 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-10 01:01:29.628349 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:01:29.628353 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-10 01:01:29.628357 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:29.628361 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-10 01:01:29.628366 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:01:29.628370 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-10 01:01:29.628374 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:01:29.628378 | orchestrator | 2026-04-10 01:01:29.628382 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-10 01:01:29.628386 | 
orchestrator | Friday 10 April 2026 00:59:44 +0000 (0:00:01.918) 0:01:12.659 ********** 2026-04-10 01:01:29.628390 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-10 01:01:29.628394 | orchestrator | 2026-04-10 01:01:29.628398 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-10 01:01:29.628402 | orchestrator | Friday 10 April 2026 00:59:44 +0000 (0:00:00.613) 0:01:13.273 ********** 2026-04-10 01:01:29.628406 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:01:29.628410 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:29.628414 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:29.628418 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:29.628422 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:01:29.628426 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:01:29.628430 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:01:29.628434 | orchestrator | 2026-04-10 01:01:29.628438 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-10 01:01:29.628441 | orchestrator | Friday 10 April 2026 00:59:45 +0000 (0:00:00.858) 0:01:14.131 ********** 2026-04-10 01:01:29.628445 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:01:29.628449 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:01:29.628453 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:01:29.628457 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:01:29.628461 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:01:29.628465 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:29.628468 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:01:29.628472 | orchestrator | 2026-04-10 01:01:29.628477 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-10 01:01:29.628480 | orchestrator | Friday 10 April 2026 00:59:47 +0000 (0:00:02.185) 
0:01:16.316 ********** 2026-04-10 01:01:29.628484 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-10 01:01:29.628488 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-10 01:01:29.628492 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:29.628496 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:29.628499 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-10 01:01:29.628503 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:01:29.628507 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-10 01:01:29.628511 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:29.628518 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-10 01:01:29.628522 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:01:29.628525 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-10 01:01:29.628529 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:01:29.628533 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-10 01:01:29.628541 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:01:29.628545 | orchestrator | 2026-04-10 01:01:29.628549 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-10 01:01:29.628553 | orchestrator | Friday 10 April 2026 00:59:50 +0000 (0:00:02.049) 0:01:18.366 ********** 2026-04-10 01:01:29.628557 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-10 01:01:29.628561 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-10 01:01:29.628565 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-10 01:01:29.628569 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-10 01:01:29.628573 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:29.628577 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:29.628581 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:01:29.628585 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-10 01:01:29.628590 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:29.628593 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-10 01:01:29.628597 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:01:29.628602 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-10 01:01:29.628605 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:01:29.628609 | orchestrator | 2026-04-10 01:01:29.628613 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-10 01:01:29.628617 | orchestrator | Friday 10 April 2026 00:59:52 +0000 (0:00:02.054) 0:01:20.420 ********** 2026-04-10 01:01:29.628621 | orchestrator | [WARNING]: Skipped 2026-04-10 01:01:29.628625 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-10 01:01:29.628629 | orchestrator | due to this access issue: 2026-04-10 01:01:29.628656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-10 01:01:29.628661 | orchestrator | not a directory 
2026-04-10 01:01:29.628665 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:01:29.628669 | orchestrator |
2026-04-10 01:01:29.628673 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-10 01:01:29.628677 | orchestrator | Friday 10 April 2026 00:59:53 +0000 (0:00:01.689) 0:01:22.110 **********
2026-04-10 01:01:29.628681 | orchestrator | skipping: [testbed-manager]
2026-04-10 01:01:29.628684 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:01:29.628688 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:01:29.628692 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:01:29.628696 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:01:29.628700 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:01:29.628704 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:01:29.628708 | orchestrator |
2026-04-10 01:01:29.628712 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-10 01:01:29.628715 | orchestrator | Friday 10 April 2026 00:59:54 +0000 (0:00:00.732) 0:01:22.843 **********
2026-04-10 01:01:29.628719 | orchestrator | skipping: [testbed-manager]
2026-04-10 01:01:29.628723 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:01:29.628727 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:01:29.628732 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:01:29.628735 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:01:29.628739 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:01:29.628743 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:01:29.628748 | orchestrator |
2026-04-10 01:01:29.628755 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-04-10 01:01:29.628767 | orchestrator | Friday 10 April 2026 00:59:55 +0000 (0:00:01.218) 0:01:24.061 **********
2026-04-10 01:01:29.628773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.628802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.628810 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-10 01:01:29.628817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.628828 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.628836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.628842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.628853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.628860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.628869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.628877 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-10 01:01:29.628897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.628903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.628922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.628935 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-10 01:01:29.628944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.628987 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.628997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.629001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.629005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-10 01:01:29.629013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.629018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-10 01:01:29.629026 | orchestrator |
2026-04-10 01:01:29.629030 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-04-10 01:01:29.629034 | orchestrator | Friday 10 April 2026 01:00:00 +0000 (0:00:04.449) 0:01:28.510 **********
2026-04-10 01:01:29.629038 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-10 01:01:29.629042 | orchestrator | skipping: [testbed-manager]
2026-04-10 01:01:29.629046 | orchestrator |
2026-04-10 01:01:29.629050 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-10 01:01:29.629054 | orchestrator | Friday 10 April 2026 01:00:01 +0000 (0:00:01.332) 0:01:29.842 **********
2026-04-10 01:01:29.629058 | orchestrator |
2026-04-10 01:01:29.629062 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-10 01:01:29.629066 | orchestrator | Friday 10 April 2026 01:00:01 +0000 (0:00:00.097) 0:01:29.940 **********
2026-04-10 01:01:29.629070 | orchestrator |
2026-04-10 01:01:29.629073 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-10 01:01:29.629078 | orchestrator | Friday 10 April 2026 01:00:01 +0000 (0:00:00.069) 0:01:30.010 **********
2026-04-10 01:01:29.629082 | orchestrator |
2026-04-10 01:01:29.629086 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-10 01:01:29.629090 | orchestrator | Friday 10 April 2026 01:00:01 +0000 (0:00:00.060) 0:01:30.071 **********
2026-04-10 01:01:29.629094 | orchestrator |
2026-04-10 01:01:29.629098 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-10 01:01:29.629102 | orchestrator | Friday 10 April 2026 01:00:01 +0000 (0:00:00.059) 0:01:30.130 **********
2026-04-10 01:01:29.629106 | orchestrator |
2026-04-10 01:01:29.629110 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-10 01:01:29.629114 | orchestrator | Friday 10 April 2026 01:00:01 +0000 (0:00:00.059) 0:01:30.189 **********
2026-04-10 01:01:29.629117 | orchestrator |
2026-04-10 01:01:29.629121 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-10 01:01:29.629125 | orchestrator | Friday 10 April 2026 01:00:01 +0000 (0:00:00.081) 0:01:30.270 **********
2026-04-10 01:01:29.629129 | orchestrator |
2026-04-10 01:01:29.629133 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-10 01:01:29.629136 | orchestrator | Friday 10 April 2026 01:00:02 +0000 (0:00:00.139) 0:01:30.410 **********
2026-04-10 01:01:29.629140 | orchestrator | changed: [testbed-manager]
2026-04-10 01:01:29.629145 | orchestrator |
2026-04-10 01:01:29.629149 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-10 01:01:29.629156 | orchestrator | Friday 10 April 2026 01:00:16 +0000 (0:00:14.361) 0:01:44.771 **********
2026-04-10 01:01:29.629160 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:01:29.629164 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:01:29.629168 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:01:29.629172 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:01:29.629176 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:01:29.629180 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:01:29.629184 | orchestrator | changed: [testbed-manager]
2026-04-10 01:01:29.629188 | orchestrator |
2026-04-10 01:01:29.629192 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-10 01:01:29.629195 | orchestrator | Friday 10 April 2026 01:00:29 +0000 (0:00:13.260) 0:01:58.031 **********
2026-04-10 01:01:29.629199 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:01:29.629203 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:01:29.629207 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:01:29.629211 | orchestrator |
2026-04-10 01:01:29.629215 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-10 01:01:29.629219 | orchestrator | Friday 10 April 2026 01:00:35 +0000 (0:00:05.451) 0:02:03.483 **********
2026-04-10 01:01:29.629228 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:01:29.629235 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:01:29.629241 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:01:29.629247 | orchestrator |
2026-04-10 01:01:29.629253 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-10 01:01:29.629260 | orchestrator | Friday 10 April 2026 01:00:41 +0000 (0:00:06.055) 0:02:09.538 **********
2026-04-10 01:01:29.629266 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:01:29.629272 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:01:29.629277 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:01:29.629284 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:01:29.629290 | orchestrator | changed: [testbed-manager]
2026-04-10 01:01:29.629296 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:01:29.629302 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:01:29.629309 | orchestrator | 2026-04-10 01:01:29.629316 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-10 01:01:29.629322 | orchestrator | Friday 10 April 2026 01:00:53 +0000 (0:00:12.644) 0:02:22.183 ********** 2026-04-10 01:01:29.629333 | orchestrator | changed: [testbed-manager] 2026-04-10 01:01:29.629339 | orchestrator | 2026-04-10 01:01:29.629346 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-10 01:01:29.629352 | orchestrator | Friday 10 April 2026 01:01:01 +0000 (0:00:07.585) 0:02:29.768 ********** 2026-04-10 01:01:29.629358 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:01:29.629364 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:29.629371 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:01:29.629377 | orchestrator | 2026-04-10 01:01:29.629383 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-10 01:01:29.629388 | orchestrator | Friday 10 April 2026 01:01:11 +0000 (0:00:09.797) 0:02:39.565 ********** 2026-04-10 01:01:29.629399 | orchestrator | changed: [testbed-manager] 2026-04-10 01:01:29.629408 | orchestrator | 2026-04-10 01:01:29.629418 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-10 01:01:29.629424 | orchestrator | Friday 10 April 2026 01:01:16 +0000 (0:00:05.460) 0:02:45.026 ********** 2026-04-10 01:01:29.629430 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:01:29.629435 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:01:29.629442 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:01:29.629448 | orchestrator | 2026-04-10 01:01:29.629453 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:01:29.629458 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  
rescued=0 ignored=0 2026-04-10 01:01:29.629465 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-10 01:01:29.629472 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-10 01:01:29.629478 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-10 01:01:29.629485 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-10 01:01:29.629491 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-10 01:01:29.629497 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-10 01:01:29.629503 | orchestrator | 2026-04-10 01:01:29.629509 | orchestrator | 2026-04-10 01:01:29.629516 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:01:29.629526 | orchestrator | Friday 10 April 2026 01:01:26 +0000 (0:00:09.701) 0:02:54.728 ********** 2026-04-10 01:01:29.629531 | orchestrator | =============================================================================== 2026-04-10 01:01:29.629534 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.82s 2026-04-10 01:01:29.629538 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.36s 2026-04-10 01:01:29.629542 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.55s 2026-04-10 01:01:29.629546 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.26s 2026-04-10 01:01:29.629550 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 12.64s 2026-04-10 01:01:29.629559 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter 
container -------- 9.80s 2026-04-10 01:01:29.629563 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.70s 2026-04-10 01:01:29.629567 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.59s 2026-04-10 01:01:29.629571 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.06s 2026-04-10 01:01:29.629575 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.72s 2026-04-10 01:01:29.629579 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.60s 2026-04-10 01:01:29.629583 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.46s 2026-04-10 01:01:29.629587 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.45s 2026-04-10 01:01:29.629591 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.45s 2026-04-10 01:01:29.629595 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.38s 2026-04-10 01:01:29.629598 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.34s 2026-04-10 01:01:29.629602 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.89s 2026-04-10 01:01:29.629606 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.18s 2026-04-10 01:01:29.629610 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.05s 2026-04-10 01:01:29.629614 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.05s 2026-04-10 01:01:29.629618 | orchestrator | 2026-04-10 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:32.646552 | orchestrator | 2026-04-10 01:01:32 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in 
state STARTED 2026-04-10 01:01:32.648037 | orchestrator | 2026-04-10 01:01:32 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:32.648640 | orchestrator | 2026-04-10 01:01:32 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 01:01:32.649590 | orchestrator | 2026-04-10 01:01:32 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:32.649625 | orchestrator | 2026-04-10 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:35.675939 | orchestrator | 2026-04-10 01:01:35 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:35.676974 | orchestrator | 2026-04-10 01:01:35 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:35.677482 | orchestrator | 2026-04-10 01:01:35 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 01:01:35.678450 | orchestrator | 2026-04-10 01:01:35 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:35.678487 | orchestrator | 2026-04-10 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:38.718339 | orchestrator | 2026-04-10 01:01:38 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:38.721368 | orchestrator | 2026-04-10 01:01:38 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:38.723149 | orchestrator | 2026-04-10 01:01:38 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 01:01:38.724983 | orchestrator | 2026-04-10 01:01:38 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:38.725668 | orchestrator | 2026-04-10 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:41.774248 | orchestrator | 2026-04-10 01:01:41 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 
01:01:41.776005 | orchestrator | 2026-04-10 01:01:41 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:41.777548 | orchestrator | 2026-04-10 01:01:41 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 01:01:41.779285 | orchestrator | 2026-04-10 01:01:41 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:41.779318 | orchestrator | 2026-04-10 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:44.830485 | orchestrator | 2026-04-10 01:01:44 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:44.833076 | orchestrator | 2026-04-10 01:01:44 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:44.834425 | orchestrator | 2026-04-10 01:01:44 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 01:01:44.835530 | orchestrator | 2026-04-10 01:01:44 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:44.835550 | orchestrator | 2026-04-10 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:47.883185 | orchestrator | 2026-04-10 01:01:47 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:47.883241 | orchestrator | 2026-04-10 01:01:47 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:47.885410 | orchestrator | 2026-04-10 01:01:47 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state STARTED 2026-04-10 01:01:47.885474 | orchestrator | 2026-04-10 01:01:47 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:47.885487 | orchestrator | 2026-04-10 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:50.939438 | orchestrator | 2026-04-10 01:01:50 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:50.940800 | orchestrator 
| 2026-04-10 01:01:50 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:50.942219 | orchestrator | 2026-04-10 01:01:50 | INFO  | Task b884ef43-1020-4195-b3cc-4fc6b103c029 is in state SUCCESS 2026-04-10 01:01:50.943446 | orchestrator | 2026-04-10 01:01:50.943472 | orchestrator | 2026-04-10 01:01:50.943479 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:01:50.943487 | orchestrator | 2026-04-10 01:01:50.943493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:01:50.943500 | orchestrator | Friday 10 April 2026 00:58:38 +0000 (0:00:00.368) 0:00:00.368 ********** 2026-04-10 01:01:50.943506 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:01:50.943514 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:01:50.943521 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:01:50.943528 | orchestrator | 2026-04-10 01:01:50.943534 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:01:50.943551 | orchestrator | Friday 10 April 2026 00:58:39 +0000 (0:00:00.438) 0:00:00.807 ********** 2026-04-10 01:01:50.943574 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-10 01:01:50.943583 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-10 01:01:50.943589 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-10 01:01:50.943596 | orchestrator | 2026-04-10 01:01:50.943603 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-10 01:01:50.943610 | orchestrator | 2026-04-10 01:01:50.943617 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-10 01:01:50.943624 | orchestrator | Friday 10 April 2026 00:58:39 +0000 (0:00:00.444) 0:00:01.251 ********** 2026-04-10 01:01:50.943631 | orchestrator | 
included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:01:50.943638 | orchestrator | 2026-04-10 01:01:50.943645 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-10 01:01:50.943651 | orchestrator | Friday 10 April 2026 00:58:40 +0000 (0:00:01.037) 0:00:02.288 ********** 2026-04-10 01:01:50.943658 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-04-10 01:01:50.943678 | orchestrator | 2026-04-10 01:01:50.943685 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-04-10 01:01:50.943692 | orchestrator | Friday 10 April 2026 00:58:56 +0000 (0:00:16.229) 0:00:18.517 ********** 2026-04-10 01:01:50.943699 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-10 01:01:50.943707 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-10 01:01:50.943714 | orchestrator | 2026-04-10 01:01:50.943721 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-10 01:01:50.943728 | orchestrator | Friday 10 April 2026 00:59:04 +0000 (0:00:07.856) 0:00:26.374 ********** 2026-04-10 01:01:50.943735 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-10 01:01:50.943741 | orchestrator | 2026-04-10 01:01:50.943748 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-10 01:01:50.943755 | orchestrator | Friday 10 April 2026 00:59:09 +0000 (0:00:04.313) 0:00:30.687 ********** 2026-04-10 01:01:50.943776 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-10 01:01:50.943784 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:01:50.943790 | orchestrator | 2026-04-10 01:01:50.943797 | orchestrator | TASK 
[service-ks-register : glance | Creating roles] *************************** 2026-04-10 01:01:50.943803 | orchestrator | Friday 10 April 2026 00:59:13 +0000 (0:00:04.194) 0:00:34.882 ********** 2026-04-10 01:01:50.943810 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:01:50.943817 | orchestrator | 2026-04-10 01:01:50.943824 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-10 01:01:50.943830 | orchestrator | Friday 10 April 2026 00:59:16 +0000 (0:00:03.690) 0:00:38.572 ********** 2026-04-10 01:01:50.943837 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-10 01:01:50.943844 | orchestrator | 2026-04-10 01:01:50.943851 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-10 01:01:50.943858 | orchestrator | Friday 10 April 2026 00:59:21 +0000 (0:00:04.493) 0:00:43.066 ********** 2026-04-10 01:01:50.943925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.943946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.943954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.943966 | orchestrator | 2026-04-10 01:01:50.943973 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-10 01:01:50.943980 | orchestrator | Friday 10 April 2026 00:59:26 +0000 (0:00:05.160) 0:00:48.226 ********** 2026-04-10 01:01:50.943987 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:01:50.944038 | orchestrator | 2026-04-10 01:01:50.944047 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-10 01:01:50.944058 | orchestrator | Friday 10 April 2026 00:59:27 +0000 (0:00:00.660) 0:00:48.887 ********** 2026-04-10 01:01:50.944065 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:01:50.944072 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.944079 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:01:50.944085 | orchestrator | 2026-04-10 01:01:50.944092 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-10 01:01:50.944098 | orchestrator | Friday 10 April 2026 00:59:30 +0000 (0:00:03.522) 0:00:52.409 ********** 2026-04-10 01:01:50.944106 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:01:50.944116 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:01:50.944123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:01:50.944129 | orchestrator | 2026-04-10 01:01:50.944136 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-10 01:01:50.944142 | orchestrator | Friday 10 April 2026 
00:59:32 +0000 (0:00:01.813) 0:00:54.223 ********** 2026-04-10 01:01:50.944149 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:01:50.944156 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:01:50.944163 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:01:50.944169 | orchestrator | 2026-04-10 01:01:50.944176 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-10 01:01:50.944182 | orchestrator | Friday 10 April 2026 00:59:33 +0000 (0:00:01.320) 0:00:55.544 ********** 2026-04-10 01:01:50.944189 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:01:50.944196 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:01:50.944202 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:01:50.944209 | orchestrator | 2026-04-10 01:01:50.944216 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-10 01:01:50.944222 | orchestrator | Friday 10 April 2026 00:59:34 +0000 (0:00:00.682) 0:00:56.226 ********** 2026-04-10 01:01:50.944229 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944235 | orchestrator | 2026-04-10 01:01:50.944242 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-10 01:01:50.944249 | orchestrator | Friday 10 April 2026 00:59:34 +0000 (0:00:00.122) 0:00:56.349 ********** 2026-04-10 01:01:50.944256 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944263 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944269 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944276 | orchestrator | 2026-04-10 01:01:50.944283 | orchestrator | TASK [glance : include_tasks] ************************************************** 
2026-04-10 01:01:50.944290 | orchestrator | Friday 10 April 2026 00:59:35 +0000 (0:00:00.265) 0:00:56.614 ********** 2026-04-10 01:01:50.944297 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:01:50.944309 | orchestrator | 2026-04-10 01:01:50.944317 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-10 01:01:50.944324 | orchestrator | Friday 10 April 2026 00:59:35 +0000 (0:00:00.669) 0:00:57.284 ********** 2026-04-10 01:01:50.944331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.944347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.944355 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.944366 | orchestrator | 2026-04-10 01:01:50.944373 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-10 01:01:50.944380 | orchestrator | Friday 10 April 2026 00:59:39 +0000 (0:00:03.347) 0:01:00.632 ********** 2026-04-10 
01:01:50.944394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 01:01:50.944402 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 01:01:50.944420 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 01:01:50.944437 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944444 | orchestrator | 2026-04-10 01:01:50.944454 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-10 01:01:50.944461 | orchestrator | Friday 10 April 2026 00:59:42 +0000 (0:00:03.644) 0:01:04.277 ********** 2026-04-10 01:01:50.944468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 01:01:50.944480 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 01:01:50.944495 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-10 01:01:50.944524 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944530 | orchestrator | 2026-04-10 01:01:50.944537 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-10 01:01:50.944544 | orchestrator | Friday 10 April 2026 00:59:46 +0000 (0:00:04.256) 0:01:08.534 ********** 2026-04-10 01:01:50.944551 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944558 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944564 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944571 | orchestrator | 2026-04-10 01:01:50.944578 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-10 01:01:50.944584 | orchestrator | Friday 10 April 2026 00:59:51 +0000 (0:00:04.984) 0:01:13.518 ********** 2026-04-10 01:01:50.944591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.944605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.944617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.944624 | orchestrator | 2026-04-10 01:01:50.944631 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-10 01:01:50.944637 | orchestrator | Friday 10 April 2026 00:59:56 +0000 (0:00:04.570) 0:01:18.089 ********** 2026-04-10 01:01:50.944644 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.944651 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:01:50.944658 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:01:50.944665 | orchestrator | 2026-04-10 01:01:50.944671 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-10 01:01:50.944678 | orchestrator | Friday 10 April 2026 01:00:03 +0000 (0:00:06.726) 0:01:24.815 ********** 2026-04-10 01:01:50.944685 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944692 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944699 | orchestrator | skipping: 
[testbed-node-2] 2026-04-10 01:01:50.944706 | orchestrator | 2026-04-10 01:01:50.944714 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-10 01:01:50.944721 | orchestrator | Friday 10 April 2026 01:00:06 +0000 (0:00:03.694) 0:01:28.509 ********** 2026-04-10 01:01:50.944729 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944738 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944745 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944755 | orchestrator | 2026-04-10 01:01:50.944777 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-10 01:01:50.944785 | orchestrator | Friday 10 April 2026 01:00:10 +0000 (0:00:03.445) 0:01:31.955 ********** 2026-04-10 01:01:50.944793 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944800 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944813 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944820 | orchestrator | 2026-04-10 01:01:50.944827 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-10 01:01:50.944834 | orchestrator | Friday 10 April 2026 01:00:15 +0000 (0:00:04.753) 0:01:36.708 ********** 2026-04-10 01:01:50.944841 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944848 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944856 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944868 | orchestrator | 2026-04-10 01:01:50.944876 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-10 01:01:50.944883 | orchestrator | Friday 10 April 2026 01:00:20 +0000 (0:00:05.826) 0:01:42.535 ********** 2026-04-10 01:01:50.944894 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944901 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944909 | orchestrator | skipping: 
[testbed-node-2] 2026-04-10 01:01:50.944916 | orchestrator | 2026-04-10 01:01:50.944923 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-10 01:01:50.944931 | orchestrator | Friday 10 April 2026 01:00:22 +0000 (0:00:01.074) 0:01:43.610 ********** 2026-04-10 01:01:50.944938 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-10 01:01:50.944946 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.944953 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-10 01:01:50.944960 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.944968 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-10 01:01:50.944976 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.944983 | orchestrator | 2026-04-10 01:01:50.944990 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-10 01:01:50.944997 | orchestrator | Friday 10 April 2026 01:00:27 +0000 (0:00:05.326) 0:01:48.937 ********** 2026-04-10 01:01:50.945004 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.945011 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.945017 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.945024 | orchestrator | 2026-04-10 01:01:50.945030 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-10 01:01:50.945037 | orchestrator | Friday 10 April 2026 01:00:31 +0000 (0:00:04.466) 0:01:53.403 ********** 2026-04-10 01:01:50.945044 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.945051 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.945057 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.945064 | orchestrator | 2026-04-10 01:01:50.945070 
| orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-10 01:01:50.945077 | orchestrator | Friday 10 April 2026 01:00:35 +0000 (0:00:03.466) 0:01:56.870 ********** 2026-04-10 01:01:50.945084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.945104 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.945111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-10 01:01:50.945119 | orchestrator | 2026-04-10 01:01:50.945126 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-10 01:01:50.945133 | orchestrator | Friday 10 April 2026 01:00:39 +0000 (0:00:04.432) 0:02:01.302 ********** 2026-04-10 01:01:50.945139 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:01:50.945146 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:01:50.945153 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:01:50.945160 | orchestrator | 2026-04-10 01:01:50.945166 | orchestrator | TASK [glance : Creating Glance 
database] *************************************** 2026-04-10 01:01:50.945173 | orchestrator | Friday 10 April 2026 01:00:39 +0000 (0:00:00.253) 0:02:01.557 ********** 2026-04-10 01:01:50.945184 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.945190 | orchestrator | 2026-04-10 01:01:50.945197 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-04-10 01:01:50.945204 | orchestrator | Friday 10 April 2026 01:00:42 +0000 (0:00:02.578) 0:02:04.135 ********** 2026-04-10 01:01:50.945211 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.945224 | orchestrator | 2026-04-10 01:01:50.945231 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-10 01:01:50.945238 | orchestrator | Friday 10 April 2026 01:00:45 +0000 (0:00:02.765) 0:02:06.900 ********** 2026-04-10 01:01:50.945245 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.945252 | orchestrator | 2026-04-10 01:01:50.945259 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-10 01:01:50.945266 | orchestrator | Friday 10 April 2026 01:00:47 +0000 (0:00:02.112) 0:02:09.013 ********** 2026-04-10 01:01:50.945272 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.945278 | orchestrator | 2026-04-10 01:01:50.945285 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-10 01:01:50.945291 | orchestrator | Friday 10 April 2026 01:01:15 +0000 (0:00:27.932) 0:02:36.946 ********** 2026-04-10 01:01:50.945298 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.945305 | orchestrator | 2026-04-10 01:01:50.945316 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-10 01:01:50.945323 | orchestrator | Friday 10 April 2026 01:01:18 +0000 (0:00:02.995) 0:02:39.941 ********** 2026-04-10 01:01:50.945330 | orchestrator | 
2026-04-10 01:01:50.945336 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-10 01:01:50.945343 | orchestrator | Friday 10 April 2026 01:01:18 +0000 (0:00:00.090) 0:02:40.032 ********** 2026-04-10 01:01:50.945350 | orchestrator | 2026-04-10 01:01:50.945357 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-10 01:01:50.945367 | orchestrator | Friday 10 April 2026 01:01:18 +0000 (0:00:00.071) 0:02:40.103 ********** 2026-04-10 01:01:50.945374 | orchestrator | 2026-04-10 01:01:50.945381 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-10 01:01:50.945387 | orchestrator | Friday 10 April 2026 01:01:18 +0000 (0:00:00.064) 0:02:40.168 ********** 2026-04-10 01:01:50.945394 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:01:50.945401 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:01:50.945407 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:01:50.945413 | orchestrator | 2026-04-10 01:01:50.945419 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:01:50.945426 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2026-04-10 01:01:50.945434 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-10 01:01:50.945440 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-10 01:01:50.945447 | orchestrator | 2026-04-10 01:01:50.945453 | orchestrator | 2026-04-10 01:01:50.945460 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:01:50.945467 | orchestrator | Friday 10 April 2026 01:01:48 +0000 (0:00:30.381) 0:03:10.549 ********** 2026-04-10 01:01:50.945473 | orchestrator | 
=============================================================================== 2026-04-10 01:01:50.945480 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.38s 2026-04-10 01:01:50.945487 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.93s 2026-04-10 01:01:50.945493 | orchestrator | service-ks-register : glance | Creating services ----------------------- 16.23s 2026-04-10 01:01:50.945500 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.86s 2026-04-10 01:01:50.945511 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.73s 2026-04-10 01:01:50.945518 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.83s 2026-04-10 01:01:50.945525 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.33s 2026-04-10 01:01:50.945531 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.16s 2026-04-10 01:01:50.945538 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.99s 2026-04-10 01:01:50.945544 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.75s 2026-04-10 01:01:50.945551 | orchestrator | glance : Copying over config.json files for services -------------------- 4.57s 2026-04-10 01:01:50.945557 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.49s 2026-04-10 01:01:50.945564 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.47s 2026-04-10 01:01:50.945570 | orchestrator | glance : Check glance containers ---------------------------------------- 4.43s 2026-04-10 01:01:50.945577 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 4.31s 2026-04-10 01:01:50.945584 | orchestrator | 
service-cert-copy : glance | Copying over backend internal TLS key ------ 4.26s 2026-04-10 01:01:50.945590 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.19s 2026-04-10 01:01:50.945597 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.69s 2026-04-10 01:01:50.945604 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.69s 2026-04-10 01:01:50.945611 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.64s 2026-04-10 01:01:50.945618 | orchestrator | 2026-04-10 01:01:50 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:50.945625 | orchestrator | 2026-04-10 01:01:50 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:01:50.945632 | orchestrator | 2026-04-10 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:53.997387 | orchestrator | 2026-04-10 01:01:53 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:53.999662 | orchestrator | 2026-04-10 01:01:53 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:54.001527 | orchestrator | 2026-04-10 01:01:54 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:01:54.002961 | orchestrator | 2026-04-10 01:01:54 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:01:54.003013 | orchestrator | 2026-04-10 01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:01:57.050093 | orchestrator | 2026-04-10 01:01:57 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:01:57.051444 | orchestrator | 2026-04-10 01:01:57 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:01:57.052692 | orchestrator | 2026-04-10 01:01:57 | INFO  | Task 
a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:02:06.167299 | orchestrator | 2026-04-10 01:02:06 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:02:06.167441 | orchestrator | 2026-04-10 01:02:06 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:02:09.203711 | orchestrator | 2026-04-10 01:02:09 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:02:09.205471 | orchestrator | 2026-04-10 01:02:09 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state STARTED 2026-04-10 01:02:09.210186 | orchestrator | 2026-04-10 01:02:09 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:02:09.213428 | orchestrator | 2026-04-10 01:02:09 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:02:09.213469 | orchestrator | 2026-04-10 01:02:09 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:02:12.322813 | orchestrator | 2026-04-10 01:02:12 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:02:12.324416 | orchestrator | 2026-04-10 01:02:12 | INFO  | Task dc56e433-594b-4e83-aa6c-6d2a36271a42 is in state SUCCESS 2026-04-10 01:02:12.325700 | orchestrator | 2026-04-10 01:02:12.325732 | orchestrator | 2026-04-10 01:02:12.325755 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:02:12.325766 | orchestrator | 2026-04-10 01:02:12.325775 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:02:12.325782 | orchestrator | Friday 10 April 2026 00:59:14 +0000 (0:00:00.295) 0:00:00.295 ********** 2026-04-10 01:02:12.325788 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:02:12.325795 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:02:12.325802 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:02:12.325808 | orchestrator | 2026-04-10 01:02:12.325815 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:02:12.325821 | orchestrator | Friday 10 April 2026 00:59:14 +0000 (0:00:00.290) 0:00:00.586 ********** 2026-04-10 01:02:12.325827 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-10 01:02:12.325834 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-10 01:02:12.325840 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-10 01:02:12.325847 | orchestrator | 2026-04-10 01:02:12.325854 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-10 01:02:12.325860 | orchestrator | 2026-04-10 01:02:12.325867 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-10 01:02:12.325890 | orchestrator | Friday 10 April 2026 00:59:14 +0000 (0:00:00.261) 0:00:00.848 ********** 2026-04-10 01:02:12.325898 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:02:12.325904 | orchestrator | 2026-04-10 01:02:12.325911 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-10 01:02:12.325918 | orchestrator | Friday 10 April 2026 00:59:15 +0000 (0:00:00.729) 0:00:01.577 ********** 2026-04-10 01:02:12.325925 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-10 01:02:12.325932 | orchestrator | 2026-04-10 01:02:12.325939 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-10 01:02:12.325946 | orchestrator | Friday 10 April 2026 00:59:18 +0000 (0:00:03.661) 0:00:05.239 ********** 2026-04-10 01:02:12.326001 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-10 01:02:12.326144 | orchestrator | changed: [testbed-node-0] => 
(item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-10 01:02:12.326156 | orchestrator | 2026-04-10 01:02:12.326336 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-10 01:02:12.326351 | orchestrator | Friday 10 April 2026 00:59:26 +0000 (0:00:07.796) 0:00:13.037 ********** 2026-04-10 01:02:12.326358 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 01:02:12.326367 | orchestrator | 2026-04-10 01:02:12.326372 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-10 01:02:12.326376 | orchestrator | Friday 10 April 2026 00:59:30 +0000 (0:00:03.650) 0:00:16.687 ********** 2026-04-10 01:02:12.326381 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-10 01:02:12.326385 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:02:12.326389 | orchestrator | 2026-04-10 01:02:12.326393 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-10 01:02:12.326397 | orchestrator | Friday 10 April 2026 00:59:34 +0000 (0:00:04.511) 0:00:21.199 ********** 2026-04-10 01:02:12.326402 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:02:12.326406 | orchestrator | 2026-04-10 01:02:12.326410 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-10 01:02:12.326414 | orchestrator | Friday 10 April 2026 00:59:38 +0000 (0:00:03.587) 0:00:24.786 ********** 2026-04-10 01:02:12.326418 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-10 01:02:12.326423 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-10 01:02:12.326427 | orchestrator | 2026-04-10 01:02:12.326431 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-10 01:02:12.326435 
| orchestrator | Friday 10 April 2026 00:59:46 +0000 (0:00:08.243) 0:00:33.030 ********** 2026-04-10 01:02:12.326441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.326457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.326485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326495 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.326517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.326537 | orchestrator | 2026-04-10 01:02:12.326541 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-10 01:02:12.326545 | orchestrator | Friday 10 April 2026 00:59:50 +0000 (0:00:04.016) 0:00:37.046 ********** 2026-04-10 01:02:12.326552 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.326557 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:02:12.326561 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:02:12.326565 | orchestrator | 2026-04-10 01:02:12.326569 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-10 01:02:12.326574 | orchestrator | Friday 10 April 2026 00:59:51 +0000 (0:00:00.700) 0:00:37.746 
********** 2026-04-10 01:02:12.326578 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:02:12.326585 | orchestrator | 2026-04-10 01:02:12.326592 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-10 01:02:12.326602 | orchestrator | Friday 10 April 2026 00:59:52 +0000 (0:00:00.570) 0:00:38.317 ********** 2026-04-10 01:02:12.326606 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-10 01:02:12.326610 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-10 01:02:12.326614 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-10 01:02:12.326619 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-10 01:02:12.326623 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-10 01:02:12.326627 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-10 01:02:12.326631 | orchestrator | 2026-04-10 01:02:12.326635 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-10 01:02:12.326639 | orchestrator | Friday 10 April 2026 00:59:54 +0000 (0:00:02.253) 0:00:40.571 ********** 2026-04-10 01:02:12.326647 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-10 01:02:12.326655 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-10 01:02:12.326663 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-10 01:02:12.326676 | orchestrator | skipping: 
[testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-10 01:02:12.326684 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-10 01:02:12.326689 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-10 01:02:12.326696 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-10 01:02:12.326700 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-10 01:02:12.326708 | orchestrator | changed: [testbed-node-1] 
=> (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-10 01:02:12.326715 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-10 01:02:12.326721 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-10 01:02:12.326727 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-10 01:02:12.326731 | orchestrator | 2026-04-10 01:02:12.326879 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-10 01:02:12.326884 | orchestrator | Friday 10 April 2026 00:59:58 +0000 (0:00:04.067) 0:00:44.639 ********** 2026-04-10 01:02:12.326888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:02:12.326896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:02:12.326903 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-10 01:02:12.326910 | orchestrator | 2026-04-10 01:02:12.326917 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 
2026-04-10 01:02:12.326924 | orchestrator | Friday 10 April 2026 01:00:00 +0000 (0:00:02.069) 0:00:46.709 ********** 2026-04-10 01:02:12.326937 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-10 01:02:12.326943 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-10 01:02:12.326950 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-10 01:02:12.326957 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-10 01:02:12.326964 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-10 01:02:12.326970 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-10 01:02:12.326977 | orchestrator | 2026-04-10 01:02:12.326985 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-10 01:02:12.326992 | orchestrator | Friday 10 April 2026 01:00:03 +0000 (0:00:03.056) 0:00:49.765 ********** 2026-04-10 01:02:12.327000 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-10 01:02:12.327005 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-10 01:02:12.327009 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-10 01:02:12.327014 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-10 01:02:12.327018 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-10 01:02:12.327022 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-10 01:02:12.327026 | orchestrator | 2026-04-10 01:02:12.327030 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-10 01:02:12.327034 | orchestrator | Friday 10 April 2026 01:00:04 +0000 (0:00:01.145) 0:00:50.911 ********** 2026-04-10 01:02:12.327038 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.327042 | orchestrator 
| 2026-04-10 01:02:12.327047 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-10 01:02:12.327051 | orchestrator | Friday 10 April 2026 01:00:04 +0000 (0:00:00.321) 0:00:51.232 ********** 2026-04-10 01:02:12.327057 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.327064 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:02:12.327070 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:02:12.327077 | orchestrator | 2026-04-10 01:02:12.327083 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-10 01:02:12.327091 | orchestrator | Friday 10 April 2026 01:00:05 +0000 (0:00:00.329) 0:00:51.562 ********** 2026-04-10 01:02:12.327098 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:02:12.327128 | orchestrator | 2026-04-10 01:02:12.327136 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-10 01:02:12.327143 | orchestrator | Friday 10 April 2026 01:00:05 +0000 (0:00:00.670) 0:00:52.232 ********** 2026-04-10 01:02:12.327151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327174 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327228 | orchestrator | 2026-04-10 01:02:12.327232 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-10 01:02:12.327237 | orchestrator | Friday 10 April 2026 01:00:10 +0000 (0:00:04.509) 0:00:56.742 ********** 2026-04-10 01:02:12.327245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327265 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:02:12.327272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327320 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.327325 | orchestrator | skipping: [testbed-node-1] 
2026-04-10 01:02:12.327329 | orchestrator | 2026-04-10 01:02:12.327333 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-10 01:02:12.327337 | orchestrator | Friday 10 April 2026 01:00:11 +0000 (0:00:01.269) 0:00:58.012 ********** 2026-04-10 01:02:12.327342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327367 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.327372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327391 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:02:12.327398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327421 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:02:12.327425 | orchestrator | 2026-04-10 01:02:12.327429 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-10 01:02:12.327433 | orchestrator | Friday 10 April 2026 01:00:13 +0000 (0:00:01.457) 0:00:59.469 ********** 2026-04-10 01:02:12.327438 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327467 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327505 | orchestrator | 2026-04-10 01:02:12.327509 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-10 01:02:12.327514 | orchestrator | Friday 10 April 2026 01:00:18 +0000 (0:00:05.306) 0:01:04.776 ********** 2026-04-10 01:02:12.327519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-10 01:02:12.327524 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-10 01:02:12.327528 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-10 01:02:12.327533 | orchestrator | 2026-04-10 01:02:12.327538 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-10 01:02:12.327546 | 
orchestrator | Friday 10 April 2026 01:00:22 +0000 (0:00:03.709) 0:01:08.486 ********** 2026-04-10 01:02:12.327554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327568 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327631 | orchestrator | 2026-04-10 01:02:12.327636 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-10 01:02:12.327640 | orchestrator | Friday 10 April 2026 01:00:36 +0000 (0:00:14.325) 0:01:22.811 ********** 2026-04-10 01:02:12.327644 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.327648 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:02:12.327653 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:02:12.327657 | orchestrator | 2026-04-10 01:02:12.327661 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-10 01:02:12.327665 | orchestrator | Friday 10 April 2026 01:00:38 +0000 (0:00:02.016) 0:01:24.828 ********** 
2026-04-10 01:02:12.327669 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.327673 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:02:12.327677 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:02:12.327681 | orchestrator | 2026-04-10 01:02:12.327686 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-10 01:02:12.327690 | orchestrator | Friday 10 April 2026 01:00:39 +0000 (0:00:01.445) 0:01:26.273 ********** 2026-04-10 01:02:12.327696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327718 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.327729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327778 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327785 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:02:12.327790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-10 01:02:12.327794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-10 01:02:12.327810 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:02:12.327814 | orchestrator | 2026-04-10 01:02:12.327819 | orchestrator | TASK [cinder : Copying over nfs_shares files for 
cinder_volume] **************** 2026-04-10 01:02:12.327825 | orchestrator | Friday 10 April 2026 01:00:40 +0000 (0:00:00.795) 0:01:27.068 ********** 2026-04-10 01:02:12.327830 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.327837 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:02:12.327847 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:02:12.327854 | orchestrator | 2026-04-10 01:02:12.327860 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-10 01:02:12.327865 | orchestrator | Friday 10 April 2026 01:00:41 +0000 (0:00:00.258) 0:01:27.327 ********** 2026-04-10 01:02:12.327872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-10 01:02:12.327902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-10 01:02:12.327994 | orchestrator | 2026-04-10 01:02:12.327999 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-10 01:02:12.328003 | orchestrator | Friday 10 April 2026 
01:00:44 +0000 (0:00:03.502) 0:01:30.829 ********** 2026-04-10 01:02:12.328007 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.328011 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:02:12.328015 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:02:12.328019 | orchestrator | 2026-04-10 01:02:12.328023 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-10 01:02:12.328092 | orchestrator | Friday 10 April 2026 01:00:44 +0000 (0:00:00.403) 0:01:31.233 ********** 2026-04-10 01:02:12.328097 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.328101 | orchestrator | 2026-04-10 01:02:12.328105 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-10 01:02:12.328109 | orchestrator | Friday 10 April 2026 01:00:47 +0000 (0:00:02.330) 0:01:33.564 ********** 2026-04-10 01:02:12.328113 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.328117 | orchestrator | 2026-04-10 01:02:12.328121 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-10 01:02:12.328125 | orchestrator | Friday 10 April 2026 01:00:49 +0000 (0:00:02.185) 0:01:35.749 ********** 2026-04-10 01:02:12.328129 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.328133 | orchestrator | 2026-04-10 01:02:12.328137 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-10 01:02:12.328141 | orchestrator | Friday 10 April 2026 01:01:10 +0000 (0:00:21.287) 0:01:57.036 ********** 2026-04-10 01:02:12.328145 | orchestrator | 2026-04-10 01:02:12.328149 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-10 01:02:12.328154 | orchestrator | Friday 10 April 2026 01:01:10 +0000 (0:00:00.065) 0:01:57.101 ********** 2026-04-10 01:02:12.328158 | orchestrator | 2026-04-10 01:02:12.328162 | orchestrator | TASK 
[cinder : Flush handlers] ************************************************* 2026-04-10 01:02:12.328166 | orchestrator | Friday 10 April 2026 01:01:10 +0000 (0:00:00.065) 0:01:57.167 ********** 2026-04-10 01:02:12.328170 | orchestrator | 2026-04-10 01:02:12.328174 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-10 01:02:12.328178 | orchestrator | Friday 10 April 2026 01:01:10 +0000 (0:00:00.069) 0:01:57.237 ********** 2026-04-10 01:02:12.328182 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.328186 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:02:12.328190 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:02:12.328194 | orchestrator | 2026-04-10 01:02:12.328198 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-10 01:02:12.328206 | orchestrator | Friday 10 April 2026 01:01:36 +0000 (0:00:25.696) 0:02:22.933 ********** 2026-04-10 01:02:12.328210 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.328214 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:02:12.328218 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:02:12.328225 | orchestrator | 2026-04-10 01:02:12.328232 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-10 01:02:12.328238 | orchestrator | Friday 10 April 2026 01:01:46 +0000 (0:00:10.135) 0:02:33.069 ********** 2026-04-10 01:02:12.328245 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.328251 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:02:12.328264 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:02:12.328271 | orchestrator | 2026-04-10 01:02:12.328279 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-10 01:02:12.328284 | orchestrator | Friday 10 April 2026 01:02:05 +0000 (0:00:18.686) 0:02:51.755 ********** 2026-04-10 01:02:12.328288 | 
orchestrator | changed: [testbed-node-0] 2026-04-10 01:02:12.328292 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:02:12.328296 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:02:12.328300 | orchestrator | 2026-04-10 01:02:12.328304 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-10 01:02:12.328308 | orchestrator | Friday 10 April 2026 01:02:11 +0000 (0:00:05.598) 0:02:57.354 ********** 2026-04-10 01:02:12.328312 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:02:12.328316 | orchestrator | 2026-04-10 01:02:12.328321 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:02:12.328326 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-10 01:02:12.328330 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 01:02:12.328339 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 01:02:12.328343 | orchestrator | 2026-04-10 01:02:12.328347 | orchestrator | 2026-04-10 01:02:12.328351 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:02:12.328356 | orchestrator | Friday 10 April 2026 01:02:11 +0000 (0:00:00.225) 0:02:57.579 ********** 2026-04-10 01:02:12.328360 | orchestrator | =============================================================================== 2026-04-10 01:02:12.328364 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.70s 2026-04-10 01:02:12.328368 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.29s 2026-04-10 01:02:12.328372 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 18.69s 2026-04-10 01:02:12.328376 | orchestrator | cinder : Copying 
over cinder.conf -------------------------------------- 14.33s 2026-04-10 01:02:12.328380 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.14s 2026-04-10 01:02:12.328385 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.24s 2026-04-10 01:02:12.328389 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.80s 2026-04-10 01:02:12.328393 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.60s 2026-04-10 01:02:12.328397 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.31s 2026-04-10 01:02:12.328401 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.51s 2026-04-10 01:02:12.328405 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.51s 2026-04-10 01:02:12.328409 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.07s 2026-04-10 01:02:12.328413 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.02s 2026-04-10 01:02:12.328417 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.71s 2026-04-10 01:02:12.328421 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.66s 2026-04-10 01:02:12.328425 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.65s 2026-04-10 01:02:12.328430 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.59s 2026-04-10 01:02:12.328434 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.50s 2026-04-10 01:02:12.328438 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.06s 2026-04-10 01:02:12.328442 | orchestrator | cinder : Creating Cinder 
database --------------------------------------- 2.33s 2026-04-10 01:02:12.328449 | orchestrator | 2026-04-10 01:02:12 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:02:12.328453 | orchestrator | 2026-04-10 01:02:12 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:02:12.328458 | orchestrator | 2026-04-10 01:02:12 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:02:15.529329 | orchestrator | 2026-04-10 01:02:15 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:02:15.529973 | orchestrator | 2026-04-10 01:02:15 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:02:15.530967 | orchestrator | 2026-04-10 01:02:15 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:02:15.531762 | orchestrator | 2026-04-10 01:02:15 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:02:15.531797 | orchestrator | 2026-04-10 01:02:15 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:02:18.567834 | orchestrator | 2026-04-10 01:02:18 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:02:18.567889 | orchestrator | 2026-04-10 01:02:18 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:02:18.568419 | orchestrator | 2026-04-10 01:02:18 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:02:18.569251 | orchestrator | 2026-04-10 01:02:18 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:02:18.569267 | orchestrator | 2026-04-10 01:02:18 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:02:21.600016 | orchestrator | 2026-04-10 01:02:21 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:02:21.601794 | orchestrator | 2026-04-10 01:02:21 | INFO  | Task 
a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:03:52.749365 | orchestrator | 2026-04-10 01:03:52 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state STARTED 2026-04-10 01:03:52.751210 | orchestrator | 2026-04-10 01:03:52 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:03:52.751254 | orchestrator | 2026-04-10 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:03:55.795646 | orchestrator | 2026-04-10 01:03:55 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:03:55.798686 | orchestrator | 2026-04-10 01:03:55 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:03:55.799296 | orchestrator | 2026-04-10 01:03:55 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:03:55.802329 | orchestrator | 2026-04-10 01:03:55 | INFO  | Task 72b30888-d71e-47e2-af3f-6c1e1295209e is in state SUCCESS 2026-04-10 01:03:55.803436 | orchestrator | 2026-04-10 01:03:55.803465 | orchestrator | 2026-04-10 01:03:55.803475 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:03:55.803485 | orchestrator | 2026-04-10 01:03:55.803536 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:03:55.803550 | orchestrator | Friday 10 April 2026 01:01:52 +0000 (0:00:00.281) 0:00:00.281 ********** 2026-04-10 01:03:55.803570 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:03:55.803578 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:03:55.803588 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:03:55.803594 | orchestrator | 2026-04-10 01:03:55.803600 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:03:55.803605 | orchestrator | Friday 10 April 2026 01:01:52 +0000 (0:00:00.255) 0:00:00.536 ********** 2026-04-10 01:03:55.803611 | 
orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-10 01:03:55.803617 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-10 01:03:55.803651 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-10 01:03:55.803657 | orchestrator | 2026-04-10 01:03:55.803663 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-10 01:03:55.803668 | orchestrator | 2026-04-10 01:03:55.803674 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-10 01:03:55.803680 | orchestrator | Friday 10 April 2026 01:01:53 +0000 (0:00:00.328) 0:00:00.864 ********** 2026-04-10 01:03:55.803686 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:03:55.803692 | orchestrator | 2026-04-10 01:03:55.803698 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-10 01:03:55.803703 | orchestrator | Friday 10 April 2026 01:01:53 +0000 (0:00:00.544) 0:00:01.409 ********** 2026-04-10 01:03:55.803709 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-10 01:03:55.803714 | orchestrator | 2026-04-10 01:03:55.803757 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-10 01:03:55.803766 | orchestrator | Friday 10 April 2026 01:01:57 +0000 (0:00:03.470) 0:00:04.880 ********** 2026-04-10 01:03:55.803772 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-10 01:03:55.803806 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-10 01:03:55.803813 | orchestrator | 2026-04-10 01:03:55.803819 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-10 
01:03:55.803825 | orchestrator | Friday 10 April 2026 01:02:03 +0000 (0:00:06.782) 0:00:11.662 ********** 2026-04-10 01:03:55.803830 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 01:03:55.803852 | orchestrator | 2026-04-10 01:03:55.803858 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-10 01:03:55.803863 | orchestrator | Friday 10 April 2026 01:02:07 +0000 (0:00:03.210) 0:00:14.872 ********** 2026-04-10 01:03:55.804021 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-10 01:03:55.804028 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:03:55.804033 | orchestrator | 2026-04-10 01:03:55.804039 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-10 01:03:55.804046 | orchestrator | Friday 10 April 2026 01:02:11 +0000 (0:00:04.124) 0:00:18.997 ********** 2026-04-10 01:03:55.804055 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:03:55.804069 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-10 01:03:55.804079 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-10 01:03:55.804108 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-10 01:03:55.804117 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-10 01:03:55.804126 | orchestrator | 2026-04-10 01:03:55.804135 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-10 01:03:55.804144 | orchestrator | Friday 10 April 2026 01:02:26 +0000 (0:00:15.123) 0:00:34.121 ********** 2026-04-10 01:03:55.804215 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-10 01:03:55.804227 | orchestrator | 2026-04-10 01:03:55.804249 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-10 
01:03:55.804259 | orchestrator | Friday 10 April 2026 01:02:30 +0000 (0:00:04.560) 0:00:38.682 ********** 2026-04-10 01:03:55.804272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.804295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.804315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2026-04-10 01:03:55.804349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.804368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804398 | orchestrator | 2026-04-10 01:03:55.804407 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-10 01:03:55.804416 | orchestrator | Friday 10 April 2026 01:02:34 +0000 (0:00:03.506) 0:00:42.189 ********** 2026-04-10 01:03:55.804426 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-10 01:03:55.804435 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-10 01:03:55.804444 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-10 01:03:55.804453 | orchestrator | 2026-04-10 01:03:55.804463 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-10 01:03:55.804481 | orchestrator | Friday 10 April 2026 01:02:35 +0000 (0:00:01.120) 0:00:43.309 ********** 2026-04-10 01:03:55.804491 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:03:55.804500 | orchestrator | 2026-04-10 01:03:55.804510 | orchestrator | TASK [barbican : Set barbican policy file] 
************************************* 2026-04-10 01:03:55.804519 | orchestrator | Friday 10 April 2026 01:02:35 +0000 (0:00:00.111) 0:00:43.421 ********** 2026-04-10 01:03:55.804528 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:03:55.804534 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:03:55.804539 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:03:55.804545 | orchestrator | 2026-04-10 01:03:55.804550 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-10 01:03:55.804555 | orchestrator | Friday 10 April 2026 01:02:35 +0000 (0:00:00.283) 0:00:43.704 ********** 2026-04-10 01:03:55.804561 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:03:55.804567 | orchestrator | 2026-04-10 01:03:55.804572 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-10 01:03:55.804577 | orchestrator | Friday 10 April 2026 01:02:37 +0000 (0:00:01.191) 0:00:44.896 ********** 2026-04-10 01:03:55.804588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.804600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.804607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 
01:03:55.804632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.804691 | orchestrator | 2026-04-10 01:03:55.804700 | orchestrator | TASK 
[service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-10 01:03:55.804708 | orchestrator | Friday 10 April 2026 01:02:40 +0000 (0:00:03.335) 0:00:48.231 ********** 2026-04-10 01:03:55.804724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.804735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804760 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:03:55.804776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.804799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804818 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:03:55.804824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.804833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804844 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:03:55.804850 | orchestrator | 2026-04-10 01:03:55.804855 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-10 01:03:55.804861 | orchestrator | Friday 10 April 2026 01:02:41 +0000 (0:00:00.925) 0:00:49.157 ********** 2026-04-10 01:03:55.804872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.804882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804893 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:03:55.804902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.804924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804947 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:03:55.804963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.804980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.804998 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:03:55.805008 | orchestrator | 2026-04-10 01:03:55.805016 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-10 01:03:55.805024 | orchestrator | Friday 10 April 2026 01:02:43 +0000 (0:00:01.975) 0:00:51.132 ********** 2026-04-10 01:03:55.805037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805145 | orchestrator | 2026-04-10 01:03:55.805155 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-10 01:03:55.805165 | orchestrator | Friday 10 April 2026 01:02:47 +0000 (0:00:04.392) 0:00:55.524 ********** 2026-04-10 01:03:55.805174 | orchestrator | changed: 
[testbed-node-0] 2026-04-10 01:03:55.805183 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:03:55.805192 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:03:55.805200 | orchestrator | 2026-04-10 01:03:55.805209 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-10 01:03:55.805218 | orchestrator | Friday 10 April 2026 01:02:50 +0000 (0:00:02.335) 0:00:57.859 ********** 2026-04-10 01:03:55.805228 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 01:03:55.805237 | orchestrator | 2026-04-10 01:03:55.805246 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-10 01:03:55.805255 | orchestrator | Friday 10 April 2026 01:02:51 +0000 (0:00:01.260) 0:00:59.119 ********** 2026-04-10 01:03:55.805264 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:03:55.805272 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:03:55.805281 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:03:55.805290 | orchestrator | 2026-04-10 01:03:55.805299 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-10 01:03:55.805308 | orchestrator | Friday 10 April 2026 01:02:51 +0000 (0:00:00.465) 0:00:59.585 ********** 2026-04-10 01:03:55.805319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805430 | orchestrator | 2026-04-10 01:03:55.805440 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-10 01:03:55.805448 | orchestrator | Friday 10 April 2026 01:03:00 +0000 (0:00:08.732) 0:01:08.318 ********** 2026-04-10 01:03:55.805462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.805471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.805482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.805491 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:03:55.805504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.805520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.805534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.805543 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:03:55.805553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-10 01:03:55.805562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.805571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:03:55.805581 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:03:55.805589 | orchestrator | 2026-04-10 01:03:55.805610 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-10 01:03:55.805664 | orchestrator | Friday 10 April 2026 01:03:01 +0000 (0:00:00.925) 0:01:09.243 ********** 2026-04-10 01:03:55.805685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-10 01:03:55.805719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:03:55.805770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-10 
01:03:55.805779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:03:55.805789 | orchestrator |
2026-04-10 01:03:55.805798 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-10 01:03:55.805807 | orchestrator | Friday 10 April 2026 01:03:03 +0000 (0:00:02.285) 0:01:11.529 **********
2026-04-10 01:03:55.805816 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:03:55.805826 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:03:55.805835 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:03:55.805844 | orchestrator |
2026-04-10 01:03:55.805853 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-10 01:03:55.805862 | orchestrator | Friday 10 April 2026 01:03:04 +0000 (0:00:00.313) 0:01:11.843 **********
2026-04-10 01:03:55.805871 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:03:55.805880 | orchestrator |
2026-04-10 01:03:55.805982 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-10 01:03:55.806001 | orchestrator | Friday 10 April 2026 01:03:06 +0000 (0:00:02.493) 0:01:14.336 **********
2026-04-10 01:03:55.806010 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:03:55.806067 | orchestrator |
2026-04-10 01:03:55.806076 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-10 01:03:55.806085 | orchestrator | Friday 10 April 2026 01:03:09 +0000 (0:00:13.287) 0:01:17.081 **********
2026-04-10 01:03:55.806094 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:03:55.806109 | orchestrator |
2026-04-10 01:03:55.806118 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-10 01:03:55.806127 | orchestrator | Friday 10 April 2026 01:03:22 +0000 (0:00:13.287) 0:01:30.369 **********
2026-04-10 01:03:55.806136 | orchestrator |
2026-04-10 01:03:55.806145 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-10 01:03:55.806154 | orchestrator | Friday 10 April 2026 01:03:23 +0000 (0:00:00.486) 0:01:30.856 **********
2026-04-10 01:03:55.806163 | orchestrator |
2026-04-10 01:03:55.806172 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-10 01:03:55.806181 | orchestrator | Friday 10 April 2026 01:03:23 +0000 (0:00:00.067) 0:01:30.924 **********
2026-04-10 01:03:55.806191 | orchestrator |
2026-04-10 01:03:55.806199 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-10 01:03:55.806208 | orchestrator | Friday 10 April 2026 01:03:23 +0000 (0:00:00.140) 0:01:31.064 **********
2026-04-10 01:03:55.806216 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:03:55.806224 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:03:55.806232 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:03:55.806241 | orchestrator |
2026-04-10 01:03:55.806250 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-10 01:03:55.806259 | orchestrator | Friday 10 April 2026 01:03:31 +0000 (0:00:07.947) 0:01:39.012 **********
2026-04-10 01:03:55.806267 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:03:55.806276 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:03:55.806285 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:03:55.806294 | orchestrator |
2026-04-10 01:03:55.806307 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-10 01:03:55.806316 | orchestrator | Friday 10 April 2026 01:03:42 +0000 (0:00:11.533) 0:01:50.546 **********
2026-04-10 01:03:55.806325 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:03:55.806333 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:03:55.806342 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:03:55.806350 | orchestrator |
2026-04-10 01:03:55.806358 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 01:03:55.806368 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-10 01:03:55.806378 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-10 01:03:55.806387 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-10 01:03:55.806395 | orchestrator |
2026-04-10 01:03:55.806403 | orchestrator |
2026-04-10 01:03:55.806412 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 01:03:55.806421 | orchestrator | Friday 10 April 2026 01:03:53 +0000 (0:00:11.061) 0:02:01.608 **********
2026-04-10 01:03:55.806429 | orchestrator | ===============================================================================
2026-04-10 01:03:55.806438 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.12s
2026-04-10 01:03:55.806453 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.29s
2026-04-10 01:03:55.806462 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.53s
2026-04-10 01:03:55.806470 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.06s
2026-04-10 01:03:55.806479 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.73s
2026-04-10 01:03:55.806488 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.95s
2026-04-10 01:03:55.806496 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.78s
2026-04-10 01:03:55.806505 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.56s
2026-04-10 01:03:55.806519 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.39s
2026-04-10 01:03:55.806527 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.13s
2026-04-10 01:03:55.806536 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.51s
2026-04-10 01:03:55.806545 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.47s
2026-04-10 01:03:55.806553 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.34s
2026-04-10 01:03:55.806562 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.21s
2026-04-10 01:03:55.806571 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.74s
2026-04-10 01:03:55.806579 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.49s
2026-04-10 01:03:55.806588 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.34s
2026-04-10 01:03:55.806597 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.29s
2026-04-10 01:03:55.806605 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.98s
2026-04-10 01:03:55.806614 | orchestrator | barbican : Checking whether barbican-api-paste.ini
file exists ---------- 1.26s 2026-04-10 01:03:55.806636 | orchestrator | 2026-04-10 01:03:55 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:03:55.806645 | orchestrator | 2026-04-10 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:03:58.853074 | orchestrator | 2026-04-10 01:03:58 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:03:58.855300 | orchestrator | 2026-04-10 01:03:58 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:03:58.857312 | orchestrator | 2026-04-10 01:03:58 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:03:58.858994 | orchestrator | 2026-04-10 01:03:58 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:03:58.859062 | orchestrator | 2026-04-10 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:01.909882 | orchestrator | 2026-04-10 01:04:01 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:01.912972 | orchestrator | 2026-04-10 01:04:01 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:01.915027 | orchestrator | 2026-04-10 01:04:01 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:01.917217 | orchestrator | 2026-04-10 01:04:01 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:01.917265 | orchestrator | 2026-04-10 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:04.977219 | orchestrator | 2026-04-10 01:04:04 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:04.979580 | orchestrator | 2026-04-10 01:04:04 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:04.980739 | orchestrator | 2026-04-10 01:04:04 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 
2026-04-10 01:04:04.982101 | orchestrator | 2026-04-10 01:04:04 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:04.982181 | orchestrator | 2026-04-10 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:08.035452 | orchestrator | 2026-04-10 01:04:08 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:08.036948 | orchestrator | 2026-04-10 01:04:08 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:08.038718 | orchestrator | 2026-04-10 01:04:08 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:08.043177 | orchestrator | 2026-04-10 01:04:08 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:08.044336 | orchestrator | 2026-04-10 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:11.079173 | orchestrator | 2026-04-10 01:04:11 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:11.080249 | orchestrator | 2026-04-10 01:04:11 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:11.083569 | orchestrator | 2026-04-10 01:04:11 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:11.089817 | orchestrator | 2026-04-10 01:04:11 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:11.090386 | orchestrator | 2026-04-10 01:04:11 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:14.123768 | orchestrator | 2026-04-10 01:04:14 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:14.123823 | orchestrator | 2026-04-10 01:04:14 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:14.125174 | orchestrator | 2026-04-10 01:04:14 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:14.125813 | 
orchestrator | 2026-04-10 01:04:14 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:14.125847 | orchestrator | 2026-04-10 01:04:14 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:17.156750 | orchestrator | 2026-04-10 01:04:17 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:17.158101 | orchestrator | 2026-04-10 01:04:17 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:17.159905 | orchestrator | 2026-04-10 01:04:17 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:17.161560 | orchestrator | 2026-04-10 01:04:17 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:17.161842 | orchestrator | 2026-04-10 01:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:20.206581 | orchestrator | 2026-04-10 01:04:20 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:20.206959 | orchestrator | 2026-04-10 01:04:20 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:20.207734 | orchestrator | 2026-04-10 01:04:20 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:20.208514 | orchestrator | 2026-04-10 01:04:20 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:20.208543 | orchestrator | 2026-04-10 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:23.250494 | orchestrator | 2026-04-10 01:04:23 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:23.250839 | orchestrator | 2026-04-10 01:04:23 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:23.251676 | orchestrator | 2026-04-10 01:04:23 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:23.252531 | orchestrator | 2026-04-10 
01:04:23 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:23.252553 | orchestrator | 2026-04-10 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:26.307306 | orchestrator | 2026-04-10 01:04:26 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:26.308466 | orchestrator | 2026-04-10 01:04:26 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:26.311962 | orchestrator | 2026-04-10 01:04:26 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:26.312659 | orchestrator | 2026-04-10 01:04:26 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:26.312839 | orchestrator | 2026-04-10 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:29.355571 | orchestrator | 2026-04-10 01:04:29 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:29.355812 | orchestrator | 2026-04-10 01:04:29 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:29.358071 | orchestrator | 2026-04-10 01:04:29 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:29.358880 | orchestrator | 2026-04-10 01:04:29 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:29.359691 | orchestrator | 2026-04-10 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:32.422098 | orchestrator | 2026-04-10 01:04:32 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:32.425486 | orchestrator | 2026-04-10 01:04:32 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:32.425531 | orchestrator | 2026-04-10 01:04:32 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:32.425535 | orchestrator | 2026-04-10 01:04:32 | INFO  | Task 
2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:32.425539 | orchestrator | 2026-04-10 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:35.452929 | orchestrator | 2026-04-10 01:04:35 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:35.454161 | orchestrator | 2026-04-10 01:04:35 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:35.455533 | orchestrator | 2026-04-10 01:04:35 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:35.456176 | orchestrator | 2026-04-10 01:04:35 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:35.456202 | orchestrator | 2026-04-10 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:38.481330 | orchestrator | 2026-04-10 01:04:38 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:38.481737 | orchestrator | 2026-04-10 01:04:38 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:38.482613 | orchestrator | 2026-04-10 01:04:38 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:38.484429 | orchestrator | 2026-04-10 01:04:38 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:38.484466 | orchestrator | 2026-04-10 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:41.524935 | orchestrator | 2026-04-10 01:04:41 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:41.527058 | orchestrator | 2026-04-10 01:04:41 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:41.529264 | orchestrator | 2026-04-10 01:04:41 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state STARTED 2026-04-10 01:04:41.531142 | orchestrator | 2026-04-10 01:04:41 | INFO  | Task 
2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:41.531206 | orchestrator | 2026-04-10 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:44.565121 | orchestrator | 2026-04-10 01:04:44 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:44.568730 | orchestrator | 2026-04-10 01:04:44 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:44.569416 | orchestrator | 2026-04-10 01:04:44 | INFO  | Task 76a08406-1f2e-47f6-8193-869512ed104d is in state SUCCESS 2026-04-10 01:04:44.570665 | orchestrator | 2026-04-10 01:04:44 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:44.573007 | orchestrator | 2026-04-10 01:04:44 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:04:44.573404 | orchestrator | 2026-04-10 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:47.613392 | orchestrator | 2026-04-10 01:04:47 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:47.613514 | orchestrator | 2026-04-10 01:04:47 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:47.615722 | orchestrator | 2026-04-10 01:04:47 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:47.616389 | orchestrator | 2026-04-10 01:04:47 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:04:47.617285 | orchestrator | 2026-04-10 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:50.652814 | orchestrator | 2026-04-10 01:04:50 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:50.654639 | orchestrator | 2026-04-10 01:04:50 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:50.656312 | orchestrator | 2026-04-10 01:04:50 | INFO  | Task 
2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:50.657340 | orchestrator | 2026-04-10 01:04:50 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:04:50.657794 | orchestrator | 2026-04-10 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:53.708457 | orchestrator | 2026-04-10 01:04:53 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:53.711187 | orchestrator | 2026-04-10 01:04:53 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:53.713704 | orchestrator | 2026-04-10 01:04:53 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:53.715161 | orchestrator | 2026-04-10 01:04:53 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:04:53.715208 | orchestrator | 2026-04-10 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:56.769024 | orchestrator | 2026-04-10 01:04:56 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:56.769534 | orchestrator | 2026-04-10 01:04:56 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:56.771016 | orchestrator | 2026-04-10 01:04:56 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:56.771703 | orchestrator | 2026-04-10 01:04:56 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:04:56.771731 | orchestrator | 2026-04-10 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:04:59.817713 | orchestrator | 2026-04-10 01:04:59 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:04:59.818032 | orchestrator | 2026-04-10 01:04:59 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:04:59.818768 | orchestrator | 2026-04-10 01:04:59 | INFO  | Task 
2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:04:59.819709 | orchestrator | 2026-04-10 01:04:59 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:04:59.819734 | orchestrator | 2026-04-10 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:02.859937 | orchestrator | 2026-04-10 01:05:02 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:05:02.860194 | orchestrator | 2026-04-10 01:05:02 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:02.861868 | orchestrator | 2026-04-10 01:05:02 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:05:02.862425 | orchestrator | 2026-04-10 01:05:02 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:05:02.862455 | orchestrator | 2026-04-10 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:05.886306 | orchestrator | 2026-04-10 01:05:05 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:05:05.886747 | orchestrator | 2026-04-10 01:05:05 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:05.887416 | orchestrator | 2026-04-10 01:05:05 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:05:05.888140 | orchestrator | 2026-04-10 01:05:05 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:05:05.888168 | orchestrator | 2026-04-10 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:08.932165 | orchestrator | 2026-04-10 01:05:08 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:05:08.933253 | orchestrator | 2026-04-10 01:05:08 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:08.934558 | orchestrator | 2026-04-10 01:05:08 | INFO  | Task 
2f5acc0c-f5ea-4237-9241-dca07398d15e is in state STARTED 2026-04-10 01:05:08.935591 | orchestrator | 2026-04-10 01:05:08 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:05:08.935622 | orchestrator | 2026-04-10 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:11.980940 | orchestrator | 2026-04-10 01:05:11 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:05:11.983337 | orchestrator | 2026-04-10 01:05:11 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:11.984914 | orchestrator | 2026-04-10 01:05:11 | INFO  | Task 2f5acc0c-f5ea-4237-9241-dca07398d15e is in state SUCCESS 2026-04-10 01:05:11.986591 | orchestrator | 2026-04-10 01:05:11.986642 | orchestrator | 2026-04-10 01:05:11.986649 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-10 01:05:11.986657 | orchestrator | 2026-04-10 01:05:11.986663 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-10 01:05:11.986669 | orchestrator | Friday 10 April 2026 01:03:57 +0000 (0:00:00.108) 0:00:00.108 ********** 2026-04-10 01:05:11.986674 | orchestrator | changed: [localhost] 2026-04-10 01:05:11.986680 | orchestrator | 2026-04-10 01:05:11.986685 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-10 01:05:11.986691 | orchestrator | Friday 10 April 2026 01:03:58 +0000 (0:00:00.932) 0:00:01.041 ********** 2026-04-10 01:05:11.986696 | orchestrator | changed: [localhost] 2026-04-10 01:05:11.986701 | orchestrator | 2026-04-10 01:05:11.986706 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-04-10 01:05:11.986711 | orchestrator | Friday 10 April 2026 01:04:35 +0000 (0:00:37.158) 0:00:38.200 ********** 2026-04-10 01:05:11.986740 | orchestrator | changed: [localhost] 2026-04-10 
01:05:11.986746 | orchestrator | 2026-04-10 01:05:11.986752 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:05:11.986757 | orchestrator | 2026-04-10 01:05:11.986762 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:05:11.986767 | orchestrator | Friday 10 April 2026 01:04:41 +0000 (0:00:05.444) 0:00:43.644 ********** 2026-04-10 01:05:11.986773 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:05:11.987261 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:05:11.987278 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:05:11.987283 | orchestrator | 2026-04-10 01:05:11.987290 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:05:11.987296 | orchestrator | Friday 10 April 2026 01:04:41 +0000 (0:00:00.261) 0:00:43.906 ********** 2026-04-10 01:05:11.987302 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-04-10 01:05:11.987308 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-04-10 01:05:11.987313 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-04-10 01:05:11.987318 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-04-10 01:05:11.987324 | orchestrator | 2026-04-10 01:05:11.987330 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-04-10 01:05:11.987335 | orchestrator | skipping: no hosts matched 2026-04-10 01:05:11.987340 | orchestrator | 2026-04-10 01:05:11.987345 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:05:11.987351 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 01:05:11.987359 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2026-04-10 01:05:11.987366 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 01:05:11.987371 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 01:05:11.987376 | orchestrator | 2026-04-10 01:05:11.987381 | orchestrator | 2026-04-10 01:05:11.987386 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:05:11.987392 | orchestrator | Friday 10 April 2026 01:04:41 +0000 (0:00:00.385) 0:00:44.292 ********** 2026-04-10 01:05:11.987397 | orchestrator | =============================================================================== 2026-04-10 01:05:11.987402 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 37.16s 2026-04-10 01:05:11.987408 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.44s 2026-04-10 01:05:11.987413 | orchestrator | Ensure the destination directory exists --------------------------------- 0.93s 2026-04-10 01:05:11.987418 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-04-10 01:05:11.987423 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-04-10 01:05:11.987428 | orchestrator | 2026-04-10 01:05:11.987433 | orchestrator | 2026-04-10 01:05:11.987439 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:05:11.987444 | orchestrator | 2026-04-10 01:05:11.987449 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:05:11.987455 | orchestrator | Friday 10 April 2026 01:02:14 +0000 (0:00:00.339) 0:00:00.339 ********** 2026-04-10 01:05:11.987460 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:05:11.987465 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:05:11.987471 | 
orchestrator | ok: [testbed-node-2] 2026-04-10 01:05:11.987476 | orchestrator | 2026-04-10 01:05:11.987481 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:05:11.987495 | orchestrator | Friday 10 April 2026 01:02:15 +0000 (0:00:00.707) 0:00:01.046 ********** 2026-04-10 01:05:11.987509 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-10 01:05:11.987514 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-10 01:05:11.987520 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-10 01:05:11.987526 | orchestrator | 2026-04-10 01:05:11.987531 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-10 01:05:11.987552 | orchestrator | 2026-04-10 01:05:11.987558 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-10 01:05:11.987563 | orchestrator | Friday 10 April 2026 01:02:16 +0000 (0:00:00.674) 0:00:01.720 ********** 2026-04-10 01:05:11.987568 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:05:11.987573 | orchestrator | 2026-04-10 01:05:11.987578 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-04-10 01:05:11.987584 | orchestrator | Friday 10 April 2026 01:02:16 +0000 (0:00:00.630) 0:00:02.351 ********** 2026-04-10 01:05:11.987600 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-04-10 01:05:11.987606 | orchestrator | 2026-04-10 01:05:11.987611 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-04-10 01:05:11.987616 | orchestrator | Friday 10 April 2026 01:02:20 +0000 (0:00:03.946) 0:00:06.298 ********** 2026-04-10 01:05:11.987621 | orchestrator | changed: [testbed-node-0] => (item=designate -> 
https://api-int.testbed.osism.xyz:9001 -> internal) 2026-04-10 01:05:11.987627 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-04-10 01:05:11.987632 | orchestrator | 2026-04-10 01:05:11.987638 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-04-10 01:05:11.987643 | orchestrator | Friday 10 April 2026 01:02:26 +0000 (0:00:06.012) 0:00:12.310 ********** 2026-04-10 01:05:11.987648 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 01:05:11.987654 | orchestrator | 2026-04-10 01:05:11.987659 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-04-10 01:05:11.987664 | orchestrator | Friday 10 April 2026 01:02:30 +0000 (0:00:03.640) 0:00:15.951 ********** 2026-04-10 01:05:11.987669 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-04-10 01:05:11.987674 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:05:11.987679 | orchestrator | 2026-04-10 01:05:11.987685 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-10 01:05:11.987690 | orchestrator | Friday 10 April 2026 01:02:34 +0000 (0:00:04.473) 0:00:20.424 ********** 2026-04-10 01:05:11.987695 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:05:11.987701 | orchestrator | 2026-04-10 01:05:11.987711 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-10 01:05:11.987717 | orchestrator | Friday 10 April 2026 01:02:38 +0000 (0:00:03.125) 0:00:23.550 ********** 2026-04-10 01:05:11.987722 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-10 01:05:11.987727 | orchestrator | 2026-04-10 01:05:11.987733 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-10 
01:05:11.987738 | orchestrator | Friday 10 April 2026 01:02:42 +0000 (0:00:04.694) 0:00:28.245 ********** 2026-04-10 01:05:11.987745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.987758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988316 | orchestrator | 2026-04-10 01:05:11.988319 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-10 01:05:11.988322 | orchestrator | Friday 10 April 2026 01:02:47 +0000 (0:00:04.719) 0:00:32.964 ********** 2026-04-10 01:05:11.988326 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:11.988329 | orchestrator | 2026-04-10 01:05:11.988332 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-10 01:05:11.988335 | orchestrator | Friday 10 April 2026 01:02:47 +0000 (0:00:00.281) 0:00:33.246 ********** 2026-04-10 01:05:11.988338 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:11.988342 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:11.988345 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:11.988350 | orchestrator | 2026-04-10 01:05:11.988359 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-10 01:05:11.988364 | orchestrator | Friday 10 April 2026 01:02:48 +0000 (0:00:00.698) 0:00:33.945 ********** 2026-04-10 01:05:11.988369 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:05:11.988375 | orchestrator | 2026-04-10 01:05:11.988379 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-10 01:05:11.988385 | orchestrator | Friday 10 April 2026 01:02:49 +0000 (0:00:00.611) 0:00:34.556 ********** 
2026-04-10 01:05:11.988391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}}) 2026-04-10 01:05:11.988471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988576 | orchestrator | 2026-04-10 01:05:11.988581 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-10 01:05:11.988586 | orchestrator | Friday 10 April 2026 01:02:56 +0000 (0:00:07.182) 0:00:41.739 ********** 2026-04-10 01:05:11.988591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 01:05:11.988596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 01:05:11.988604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988642 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:11.988648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 01:05:11.988652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 01:05:11.988658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988704 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:11.988709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 01:05:11.988715 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 01:05:11.988721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988757 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:11.988760 | orchestrator | 2026-04-10 01:05:11.988764 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-10 01:05:11.988767 | orchestrator | Friday 10 April 2026 01:02:57 +0000 (0:00:01.398) 0:00:43.138 ********** 2026-04-10 01:05:11.988770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 01:05:11.988773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 01:05:11.988777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988804 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:11.988807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 01:05:11.988810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 01:05:11.988814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988819 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988855 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:11.988860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-10 01:05:11.988866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 01:05:11.988871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.988902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-04-10 01:05:11.988906 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:11.988909 | orchestrator | 2026-04-10 01:05:11.988912 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-10 01:05:11.988915 | orchestrator | Friday 10 April 2026 01:02:59 +0000 (0:00:01.498) 0:00:44.636 ********** 2026-04-10 01:05:11.988919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.988950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.988997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989011 | orchestrator | 2026-04-10 01:05:11.989024 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-10 01:05:11.989028 | orchestrator | Friday 10 April 2026 01:03:06 +0000 (0:00:07.729) 0:00:52.366 ********** 2026-04-10 01:05:11.989031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.989034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.989038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.989045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989141 | orchestrator |
2026-04-10 01:05:11.989146 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-10 01:05:11.989151 | orchestrator | Friday 10 April 2026 01:03:27 +0000 (0:00:20.151) 0:01:12.517 **********
2026-04-10 01:05:11.989157 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-10 01:05:11.989163 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-10 01:05:11.989168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-10 01:05:11.989173 | orchestrator |
2026-04-10 01:05:11.989181 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-10 01:05:11.989186 | orchestrator | Friday 10 April 2026 01:03:32 +0000 (0:00:05.092) 0:01:17.610 **********
2026-04-10 01:05:11.989191 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-10 01:05:11.989196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-10 01:05:11.989201 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-10 01:05:11.989207 | orchestrator |
2026-04-10 01:05:11.989212 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-10 01:05:11.989218 | orchestrator | Friday 10 April 2026 01:03:36 +0000 (0:00:04.279) 0:01:21.889 **********
2026-04-10 01:05:11.989224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989353 | orchestrator |
2026-04-10 01:05:11.989359 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-04-10 01:05:11.989362 | orchestrator | Friday 10 April 2026 01:03:40 +0000 (0:00:03.614) 0:01:25.504 **********
2026-04-10 01:05:11.989365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989449 | orchestrator |
2026-04-10 01:05:11.989455 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-10 01:05:11.989459 | orchestrator | Friday 10 April 2026 01:03:43 +0000 (0:00:03.301) 0:01:28.805 **********
2026-04-10 01:05:11.989462 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:11.989465 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:11.989468 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:11.989472 | orchestrator |
2026-04-10 01:05:11.989475 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-10 01:05:11.989478 | orchestrator | Friday 10 April 2026 01:03:43 +0000 (0:00:00.262) 0:01:29.068 **********
2026-04-10 01:05:11.989484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-10 01:05:11.989493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-10 01:05:11.989502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-10 01:05:11.989526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes':
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.989529 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:11.989547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.989553 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:11.989563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-04-10 01:05:11.989571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-10 01:05:11.989574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.989577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.989581 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.989589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:05:11.989593 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:11.989599 | orchestrator | 2026-04-10 01:05:11.989603 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-10 01:05:11.989608 | orchestrator | Friday 10 April 2026 01:03:44 +0000 (0:00:00.998) 0:01:30.066 ********** 2026-04-10 01:05:11.989616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.989624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.989629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-10 01:05:11.989635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:05:11.989735 | orchestrator | 2026-04-10 01:05:11.989740 | orchestrator | TASK [designate : 
include_tasks] *********************************************** 2026-04-10 01:05:11.989746 | orchestrator | Friday 10 April 2026 01:03:49 +0000 (0:00:04.982) 0:01:35.049 ********** 2026-04-10 01:05:11.989755 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:11.989761 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:11.989767 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:11.989772 | orchestrator | 2026-04-10 01:05:11.989781 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-10 01:05:11.989787 | orchestrator | Friday 10 April 2026 01:03:50 +0000 (0:00:00.399) 0:01:35.448 ********** 2026-04-10 01:05:11.989792 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-10 01:05:11.989797 | orchestrator | 2026-04-10 01:05:11.989802 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-10 01:05:11.989807 | orchestrator | Friday 10 April 2026 01:03:52 +0000 (0:00:02.331) 0:01:37.780 ********** 2026-04-10 01:05:11.989813 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-10 01:05:11.989818 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-10 01:05:11.989823 | orchestrator | 2026-04-10 01:05:11.989829 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-10 01:05:11.989834 | orchestrator | Friday 10 April 2026 01:03:54 +0000 (0:00:02.147) 0:01:39.927 ********** 2026-04-10 01:05:11.989840 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.989845 | orchestrator | 2026-04-10 01:05:11.989850 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-10 01:05:11.989860 | orchestrator | Friday 10 April 2026 01:04:11 +0000 (0:00:16.633) 0:01:56.560 ********** 2026-04-10 01:05:11.989863 | orchestrator | 2026-04-10 01:05:11.989866 | orchestrator | TASK [designate : 
Flush handlers] ********************************************** 2026-04-10 01:05:11.989870 | orchestrator | Friday 10 April 2026 01:04:11 +0000 (0:00:00.068) 0:01:56.629 ********** 2026-04-10 01:05:11.989873 | orchestrator | 2026-04-10 01:05:11.989876 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-10 01:05:11.989881 | orchestrator | Friday 10 April 2026 01:04:11 +0000 (0:00:00.094) 0:01:56.723 ********** 2026-04-10 01:05:11.989887 | orchestrator | 2026-04-10 01:05:11.989893 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-10 01:05:11.989900 | orchestrator | Friday 10 April 2026 01:04:11 +0000 (0:00:00.071) 0:01:56.795 ********** 2026-04-10 01:05:11.989903 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:11.989906 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:11.989910 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.989914 | orchestrator | 2026-04-10 01:05:11.989919 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-10 01:05:11.989924 | orchestrator | Friday 10 April 2026 01:04:19 +0000 (0:00:08.583) 0:02:05.379 ********** 2026-04-10 01:05:11.989930 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:11.989935 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:11.989941 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.989944 | orchestrator | 2026-04-10 01:05:11.989947 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-10 01:05:11.989950 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:09.036) 0:02:14.415 ********** 2026-04-10 01:05:11.989953 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.989956 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:11.989959 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:11.989963 | orchestrator | 
2026-04-10 01:05:11.989966 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-10 01:05:11.989969 | orchestrator | Friday 10 April 2026 01:04:37 +0000 (0:00:08.157) 0:02:22.573 ********** 2026-04-10 01:05:11.989973 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.989976 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:11.989979 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:11.989982 | orchestrator | 2026-04-10 01:05:11.989985 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-10 01:05:11.989988 | orchestrator | Friday 10 April 2026 01:04:49 +0000 (0:00:12.035) 0:02:34.608 ********** 2026-04-10 01:05:11.989995 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.989998 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:11.990001 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:11.990005 | orchestrator | 2026-04-10 01:05:11.990008 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-10 01:05:11.990011 | orchestrator | Friday 10 April 2026 01:04:55 +0000 (0:00:06.793) 0:02:41.401 ********** 2026-04-10 01:05:11.990051 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.990057 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:11.990062 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:11.990068 | orchestrator | 2026-04-10 01:05:11.990073 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-10 01:05:11.990079 | orchestrator | Friday 10 April 2026 01:05:03 +0000 (0:00:07.436) 0:02:48.838 ********** 2026-04-10 01:05:11.990084 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:11.990090 | orchestrator | 2026-04-10 01:05:11.990095 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:05:11.990101 | 
orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 01:05:11.990105 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-10 01:05:11.990108 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-10 01:05:11.990111 | orchestrator | 2026-04-10 01:05:11.990114 | orchestrator | 2026-04-10 01:05:11.990118 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:05:11.990121 | orchestrator | Friday 10 April 2026 01:05:10 +0000 (0:00:07.366) 0:02:56.205 ********** 2026-04-10 01:05:11.990124 | orchestrator | =============================================================================== 2026-04-10 01:05:11.990127 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.15s 2026-04-10 01:05:11.990130 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.63s 2026-04-10 01:05:11.990133 | orchestrator | designate : Restart designate-producer container ----------------------- 12.04s 2026-04-10 01:05:11.990139 | orchestrator | designate : Restart designate-api container ----------------------------- 9.04s 2026-04-10 01:05:11.990142 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.58s 2026-04-10 01:05:11.990148 | orchestrator | designate : Restart designate-central container ------------------------- 8.16s 2026-04-10 01:05:11.990154 | orchestrator | designate : Copying over config.json files for services ----------------- 7.73s 2026-04-10 01:05:11.990159 | orchestrator | designate : Restart designate-worker container -------------------------- 7.44s 2026-04-10 01:05:11.990164 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.37s 2026-04-10 01:05:11.990170 | orchestrator | 
service-cert-copy : designate | Copying over extra CA certificates ------ 7.18s 2026-04-10 01:05:11.990175 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.79s 2026-04-10 01:05:11.990180 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.01s 2026-04-10 01:05:11.990185 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.09s 2026-04-10 01:05:11.990195 | orchestrator | designate : Check designate containers ---------------------------------- 4.98s 2026-04-10 01:05:11.990201 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.72s 2026-04-10 01:05:11.990206 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.69s 2026-04-10 01:05:11.990211 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.47s 2026-04-10 01:05:11.990214 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.28s 2026-04-10 01:05:11.990217 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.95s 2026-04-10 01:05:11.990226 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.64s 2026-04-10 01:05:11.990229 | orchestrator | 2026-04-10 01:05:11 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:05:11.990233 | orchestrator | 2026-04-10 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:15.039668 | orchestrator | 2026-04-10 01:05:15 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED 2026-04-10 01:05:15.040267 | orchestrator | 2026-04-10 01:05:15 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:15.041439 | orchestrator | 2026-04-10 01:05:15 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 
01:05:15.043060 | orchestrator | 2026-04-10 01:05:15 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:15.043095 | orchestrator | 2026-04-10 01:05:15 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:18.074379 | orchestrator | 2026-04-10 01:05:18 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:18.074440 | orchestrator | 2026-04-10 01:05:18 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:18.075127 | orchestrator | 2026-04-10 01:05:18 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:18.075773 | orchestrator | 2026-04-10 01:05:18 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:18.075840 | orchestrator | 2026-04-10 01:05:18 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:21.102455 | orchestrator | 2026-04-10 01:05:21 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:21.102869 | orchestrator | 2026-04-10 01:05:21 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:21.103591 | orchestrator | 2026-04-10 01:05:21 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:21.104446 | orchestrator | 2026-04-10 01:05:21 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:21.104486 | orchestrator | 2026-04-10 01:05:21 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:24.162089 | orchestrator | 2026-04-10 01:05:24 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:24.162494 | orchestrator | 2026-04-10 01:05:24 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:24.163176 | orchestrator | 2026-04-10 01:05:24 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:24.163821 | orchestrator | 2026-04-10 01:05:24 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:24.163847 | orchestrator | 2026-04-10 01:05:24 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:27.192135 | orchestrator | 2026-04-10 01:05:27 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:27.193128 | orchestrator | 2026-04-10 01:05:27 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:27.193895 | orchestrator | 2026-04-10 01:05:27 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:27.195332 | orchestrator | 2026-04-10 01:05:27 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:27.195554 | orchestrator | 2026-04-10 01:05:27 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:30.256037 | orchestrator | 2026-04-10 01:05:30 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:30.279885 | orchestrator | 2026-04-10 01:05:30 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:30.280227 | orchestrator | 2026-04-10 01:05:30 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:30.281398 | orchestrator | 2026-04-10 01:05:30 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:30.281437 | orchestrator | 2026-04-10 01:05:30 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:33.319932 | orchestrator | 2026-04-10 01:05:33 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:33.321232 | orchestrator | 2026-04-10 01:05:33 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:33.323159 | orchestrator | 2026-04-10 01:05:33 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:33.325400 | orchestrator | 2026-04-10 01:05:33 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:33.325928 | orchestrator | 2026-04-10 01:05:33 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:36.359380 | orchestrator | 2026-04-10 01:05:36 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:36.363076 | orchestrator | 2026-04-10 01:05:36 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:36.365758 | orchestrator | 2026-04-10 01:05:36 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:36.367946 | orchestrator | 2026-04-10 01:05:36 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:36.368302 | orchestrator | 2026-04-10 01:05:36 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:39.415690 | orchestrator | 2026-04-10 01:05:39 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:39.417139 | orchestrator | 2026-04-10 01:05:39 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:39.418923 | orchestrator | 2026-04-10 01:05:39 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:39.420878 | orchestrator | 2026-04-10 01:05:39 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:39.420915 | orchestrator | 2026-04-10 01:05:39 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:42.462757 | orchestrator | 2026-04-10 01:05:42 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:42.464433 | orchestrator | 2026-04-10 01:05:42 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:42.465712 | orchestrator | 2026-04-10 01:05:42 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:42.467256 | orchestrator | 2026-04-10 01:05:42 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:42.467288 | orchestrator | 2026-04-10 01:05:42 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:45.508718 | orchestrator | 2026-04-10 01:05:45 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state STARTED
2026-04-10 01:05:45.510285 | orchestrator | 2026-04-10 01:05:45 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED
2026-04-10 01:05:45.511361 | orchestrator | 2026-04-10 01:05:45 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED
2026-04-10 01:05:45.513946 | orchestrator | 2026-04-10 01:05:45 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED
2026-04-10 01:05:45.513988 | orchestrator | 2026-04-10 01:05:45 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:05:48.542517 | orchestrator | 2026-04-10 01:05:48 | INFO  | Task dceb0686-79a9-4316-a8a7-1026d26636db is in state SUCCESS
2026-04-10 01:05:48.544011 | orchestrator |
2026-04-10 01:05:48.544069 | orchestrator |
2026-04-10 01:05:48.544078 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-10 01:05:48.544086 | orchestrator |
2026-04-10 01:05:48.544093 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-10 01:05:48.544109 | orchestrator | Friday 10 April 2026 01:01:30 +0000 (0:00:00.245) 0:00:00.245 **********
2026-04-10 01:05:48.544116 | orchestrator | ok: [testbed-node-0]
2026-04-10 01:05:48.544122 | orchestrator | ok: [testbed-node-1]
2026-04-10 01:05:48.544128 | orchestrator | ok: [testbed-node-2]
2026-04-10 01:05:48.544134 | orchestrator | ok: [testbed-node-3]
2026-04-10 01:05:48.544184 | orchestrator | ok: [testbed-node-4]
2026-04-10 01:05:48.544192 | orchestrator | ok: [testbed-node-5]
2026-04-10 01:05:48.544198 | orchestrator |
2026-04-10 01:05:48.544204 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2026-04-10 01:05:48.544210 | orchestrator | Friday 10 April 2026 01:01:30 +0000 (0:00:00.518) 0:00:00.763 ********** 2026-04-10 01:05:48.544215 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-10 01:05:48.544221 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-10 01:05:48.544227 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-10 01:05:48.544232 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-10 01:05:48.544238 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-10 01:05:48.544245 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-10 01:05:48.544251 | orchestrator | 2026-04-10 01:05:48.544257 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-10 01:05:48.544333 | orchestrator | 2026-04-10 01:05:48.544750 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-10 01:05:48.544769 | orchestrator | Friday 10 April 2026 01:01:31 +0000 (0:00:00.539) 0:00:01.303 ********** 2026-04-10 01:05:48.544775 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 01:05:48.544780 | orchestrator | 2026-04-10 01:05:48.544784 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-10 01:05:48.544788 | orchestrator | Friday 10 April 2026 01:01:32 +0000 (0:00:01.078) 0:00:02.382 ********** 2026-04-10 01:05:48.544792 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:05:48.544796 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:05:48.544800 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:05:48.544804 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:05:48.544808 | orchestrator | ok: [testbed-node-4] 2026-04-10 
01:05:48.544837 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:05:48.544842 | orchestrator | 2026-04-10 01:05:48.544846 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-10 01:05:48.544850 | orchestrator | Friday 10 April 2026 01:01:33 +0000 (0:00:01.618) 0:00:04.000 ********** 2026-04-10 01:05:48.544854 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:05:48.544864 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:05:48.544868 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:05:48.544872 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:05:48.544875 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:05:48.544879 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:05:48.544883 | orchestrator | 2026-04-10 01:05:48.544886 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-10 01:05:48.544890 | orchestrator | Friday 10 April 2026 01:01:35 +0000 (0:00:01.256) 0:00:05.257 ********** 2026-04-10 01:05:48.544894 | orchestrator | ok: [testbed-node-0] => { 2026-04-10 01:05:48.544898 | orchestrator |  "changed": false, 2026-04-10 01:05:48.544916 | orchestrator |  "msg": "All assertions passed" 2026-04-10 01:05:48.544923 | orchestrator | } 2026-04-10 01:05:48.544930 | orchestrator | ok: [testbed-node-1] => { 2026-04-10 01:05:48.544935 | orchestrator |  "changed": false, 2026-04-10 01:05:48.544942 | orchestrator |  "msg": "All assertions passed" 2026-04-10 01:05:48.544949 | orchestrator | } 2026-04-10 01:05:48.544956 | orchestrator | ok: [testbed-node-2] => { 2026-04-10 01:05:48.544963 | orchestrator |  "changed": false, 2026-04-10 01:05:48.544969 | orchestrator |  "msg": "All assertions passed" 2026-04-10 01:05:48.544976 | orchestrator | } 2026-04-10 01:05:48.544983 | orchestrator | ok: [testbed-node-3] => { 2026-04-10 01:05:48.544987 | orchestrator |  "changed": false, 2026-04-10 01:05:48.544991 | orchestrator |  "msg": "All assertions passed" 
2026-04-10 01:05:48.544994 | orchestrator | } 2026-04-10 01:05:48.544998 | orchestrator | ok: [testbed-node-4] => { 2026-04-10 01:05:48.545002 | orchestrator |  "changed": false, 2026-04-10 01:05:48.545006 | orchestrator |  "msg": "All assertions passed" 2026-04-10 01:05:48.545009 | orchestrator | } 2026-04-10 01:05:48.545013 | orchestrator | ok: [testbed-node-5] => { 2026-04-10 01:05:48.545017 | orchestrator |  "changed": false, 2026-04-10 01:05:48.545021 | orchestrator |  "msg": "All assertions passed" 2026-04-10 01:05:48.545024 | orchestrator | } 2026-04-10 01:05:48.545028 | orchestrator | 2026-04-10 01:05:48.545032 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-10 01:05:48.545036 | orchestrator | Friday 10 April 2026 01:01:35 +0000 (0:00:00.460) 0:00:05.717 ********** 2026-04-10 01:05:48.545039 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.545043 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.545047 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.545051 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.545054 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.545058 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.545062 | orchestrator | 2026-04-10 01:05:48.545074 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-10 01:05:48.545078 | orchestrator | Friday 10 April 2026 01:01:36 +0000 (0:00:00.614) 0:00:06.332 ********** 2026-04-10 01:05:48.545082 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-10 01:05:48.545090 | orchestrator | 2026-04-10 01:05:48.545094 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-10 01:05:48.545098 | orchestrator | Friday 10 April 2026 01:01:40 +0000 (0:00:03.926) 0:00:10.259 ********** 2026-04-10 01:05:48.545102 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-10 01:05:48.545107 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-10 01:05:48.545111 | orchestrator | 2026-04-10 01:05:48.545140 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-10 01:05:48.545145 | orchestrator | Friday 10 April 2026 01:01:46 +0000 (0:00:06.218) 0:00:16.477 ********** 2026-04-10 01:05:48.545149 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 01:05:48.545152 | orchestrator | 2026-04-10 01:05:48.545162 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-10 01:05:48.545166 | orchestrator | Friday 10 April 2026 01:01:49 +0000 (0:00:03.117) 0:00:19.595 ********** 2026-04-10 01:05:48.545170 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-10 01:05:48.545174 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:05:48.545177 | orchestrator | 2026-04-10 01:05:48.545181 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-10 01:05:48.545185 | orchestrator | Friday 10 April 2026 01:01:53 +0000 (0:00:03.667) 0:00:23.263 ********** 2026-04-10 01:05:48.545189 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:05:48.545192 | orchestrator | 2026-04-10 01:05:48.545196 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-10 01:05:48.545204 | orchestrator | Friday 10 April 2026 01:01:56 +0000 (0:00:03.040) 0:00:26.304 ********** 2026-04-10 01:05:48.545208 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-10 01:05:48.545212 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-10 01:05:48.545216 | orchestrator | 
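The mix of `ok:` and `changed:` results in the service-ks-register tasks above reflects idempotent "ensure present" logic: an item reports `changed` only when it actually had to be created or updated, and `ok` when Keystone already matched the desired state (e.g. the pre-existing `service` project). A minimal sketch of that reporting convention, using a hypothetical in-memory registry in place of Keystone (this is not the kolla-ansible module code):

```python
# Hedged sketch: idempotent "ensure present" with Ansible-style status
# reporting. The `registry` dict stands in for Keystone; all names here
# are illustrative, not kolla-ansible internals.
def ensure_present(registry: dict, name: str, spec: dict) -> str:
    """Create or update an entry; return "ok" or "changed"."""
    if registry.get(name) == spec:
        return "ok"        # already in desired state, nothing to do
    registry[name] = spec  # create the entry or update it in place
    return "changed"

# The "service" project already exists, the "neutron" service does not.
keystone = {"service": {"type": "project"}}
statuses = {
    "neutron service": ensure_present(keystone, "neutron", {"type": "network"}),
    "service project": ensure_present(keystone, "service", {"type": "project"}),
}
```

Running the same task again would report `ok` for every item, which is why re-deploys are safe.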
2026-04-10 01:05:48.545220 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-10 01:05:48.545223 | orchestrator | Friday 10 April 2026 01:02:03 +0000 (0:00:07.074) 0:00:33.378 ********** 2026-04-10 01:05:48.545227 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.545231 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.545235 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.545239 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.545242 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.545246 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.545250 | orchestrator | 2026-04-10 01:05:48.545254 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-10 01:05:48.545258 | orchestrator | Friday 10 April 2026 01:02:03 +0000 (0:00:00.495) 0:00:33.874 ********** 2026-04-10 01:05:48.545261 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.545265 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.545269 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.545273 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.545276 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.545280 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.545284 | orchestrator | 2026-04-10 01:05:48.545287 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-10 01:05:48.545291 | orchestrator | Friday 10 April 2026 01:02:05 +0000 (0:00:02.094) 0:00:35.968 ********** 2026-04-10 01:05:48.545295 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:05:48.545299 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:05:48.545302 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:05:48.545306 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:05:48.545310 | orchestrator | ok: [testbed-node-5] 
2026-04-10 01:05:48.545314 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:05:48.545317 | orchestrator | 2026-04-10 01:05:48.545321 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-10 01:05:48.545325 | orchestrator | Friday 10 April 2026 01:02:07 +0000 (0:00:01.624) 0:00:37.592 ********** 2026-04-10 01:05:48.545329 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.545333 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.545339 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.545346 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.545354 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.545363 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.545369 | orchestrator | 2026-04-10 01:05:48.545375 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-10 01:05:48.545381 | orchestrator | Friday 10 April 2026 01:02:09 +0000 (0:00:01.952) 0:00:39.545 ********** 2026-04-10 01:05:48.545389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 
01:05:48.545422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.545435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.545443 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.545450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.545456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.545467 | orchestrator | 2026-04-10 01:05:48.545471 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-10 01:05:48.545475 | orchestrator | Friday 10 April 2026 01:02:11 +0000 (0:00:02.481) 0:00:42.026 ********** 2026-04-10 01:05:48.545479 | orchestrator | [WARNING]: Skipped 2026-04-10 01:05:48.545483 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-10 01:05:48.545488 | orchestrator | due to this access issue: 2026-04-10 01:05:48.545492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-10 01:05:48.545511 | orchestrator | a directory 2026-04-10 01:05:48.545516 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 01:05:48.545521 | orchestrator | 2026-04-10 01:05:48.545525 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-10 01:05:48.545544 | orchestrator | Friday 10 April 2026 01:02:12 +0000 (0:00:00.833) 0:00:42.860 ********** 2026-04-10 01:05:48.545549 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 01:05:48.545554 | orchestrator | 2026-04-10 01:05:48.545561 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-10 01:05:48.545566 | orchestrator | Friday 10 April 2026 01:02:13 +0000 (0:00:01.049) 0:00:43.909 ********** 2026-04-10 01:05:48.545570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.545575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.545580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.545585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.545615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.545620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.545625 | orchestrator | 2026-04-10 01:05:48.545629 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-10 01:05:48.545634 | orchestrator | Friday 10 April 2026 01:02:17 +0000 (0:00:03.677) 0:00:47.587 ********** 2026-04-10 01:05:48.545638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545644 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.545648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545660 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:48.545664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545668 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.545688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545693 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:48.545699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545705 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.545711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545717 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.545727 | orchestrator |
2026-04-10 01:05:48.545733 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-04-10 01:05:48.545738 | orchestrator | Friday 10 April 2026 01:02:19 +0000 (0:00:02.058) 0:00:49.645 **********
2026-04-10 01:05:48.545745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545752 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:48.545765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545772 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:48.545779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545783 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.545787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545791 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.545795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545801 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.545805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545809 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.545813 | orchestrator |
2026-04-10 01:05:48.545817 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-04-10 01:05:48.545820 | orchestrator | Friday 10 April 2026 01:02:22 +0000 (0:00:02.538) 0:00:52.184 **********
2026-04-10 01:05:48.545824 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.545828 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:48.545832 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.545835 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:48.545839 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.545843 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.545847 | orchestrator |
2026-04-10 01:05:48.545850 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-04-10 01:05:48.545858 | orchestrator | Friday 10 April 2026 01:02:23 +0000 (0:00:01.883) 0:00:54.067 **********
2026-04-10 01:05:48.545862 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.545866 | orchestrator |
2026-04-10 01:05:48.545870 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-04-10 01:05:48.545875 | orchestrator | Friday 10 April 2026 01:02:24 +0000 (0:00:00.323) 0:00:54.392 **********
2026-04-10 01:05:48.545879 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.545883 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:48.545887 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:48.545890 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.545894 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.545898 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.545902 | orchestrator |
2026-04-10 01:05:48.545906 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-04-10 01:05:48.545909 | orchestrator | Friday 10 April 2026 01:02:24 +0000 (0:00:00.486) 0:00:54.879 **********
2026-04-10 01:05:48.545913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545920 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.545924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545928 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:48.545932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545936 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:48.545942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545947 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.545953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545957 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.545961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.545967 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.545971 | orchestrator |
2026-04-10 01:05:48.545975 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2026-04-10 01:05:48.545979 | orchestrator | Friday 10 April 2026 01:02:26 +0000 (0:00:02.005) 0:00:56.884 **********
2026-04-10 01:05:48.545982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.545987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546071 | orchestrator |
2026-04-10 01:05:48.546074 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-04-10 01:05:48.546078 | orchestrator | Friday 10 April 2026 01:02:29 +0000 (0:00:02.730) 0:00:59.615 **********
2026-04-10 01:05:48.546082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546115 | orchestrator |
2026-04-10 01:05:48.546119 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-04-10 01:05:48.546123 | orchestrator | Friday 10 April 2026 01:02:35 +0000 (0:00:06.127) 0:01:05.742 **********
2026-04-10 01:05:48.546133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546140 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:48.546144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546148 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.546152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546156 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.546160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546164 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:48.546168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546172 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.546181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546187 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.546191 | orchestrator |
2026-04-10 01:05:48.546195 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-10 01:05:48.546199 | orchestrator | Friday 10 April 2026 01:02:37 +0000 (0:00:01.927) 0:01:07.670 **********
2026-04-10 01:05:48.546202 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.546206 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.546210 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:05:48.546214 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:05:48.546218 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:05:48.546221 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.546225 | orchestrator |
2026-04-10 01:05:48.546229 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-10 01:05:48.546233 | orchestrator | Friday 10 April 2026 01:02:40 +0000 (0:00:02.795) 0:01:10.465 **********
2026-04-10 01:05:48.546237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546240 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.546244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546248 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.546252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-10 01:05:48.546259 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.546269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-10 01:05:48.546281 | orchestrator |
2026-04-10 01:05:48.546285 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-04-10 01:05:48.546289 | orchestrator | Friday 10 April 2026 01:02:44 +0000 (0:00:04.557) 0:01:15.023 **********
2026-04-10 01:05:48.546293 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:05:48.546297 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:05:48.546301 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:05:48.546305 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:05:48.546308 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:05:48.546312 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:05:48.546316 | orchestrator |
2026-04-10 01:05:48.546319 | orchestrator
| TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-10 01:05:48.546323 | orchestrator | Friday 10 April 2026 01:02:47 +0000 (0:00:02.154) 0:01:17.177 ********** 2026-04-10 01:05:48.546327 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546331 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546335 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546339 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546345 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546349 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546353 | orchestrator | 2026-04-10 01:05:48.546356 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-10 01:05:48.546360 | orchestrator | Friday 10 April 2026 01:02:49 +0000 (0:00:02.252) 0:01:19.430 ********** 2026-04-10 01:05:48.546364 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546368 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546372 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546376 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546379 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546383 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546387 | orchestrator | 2026-04-10 01:05:48.546391 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-10 01:05:48.546394 | orchestrator | Friday 10 April 2026 01:02:51 +0000 (0:00:02.308) 0:01:21.739 ********** 2026-04-10 01:05:48.546398 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546402 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546406 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546409 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546413 | orchestrator | skipping: [testbed-node-3] 2026-04-10 
01:05:48.546417 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546420 | orchestrator | 2026-04-10 01:05:48.546424 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-10 01:05:48.546428 | orchestrator | Friday 10 April 2026 01:02:54 +0000 (0:00:02.459) 0:01:24.199 ********** 2026-04-10 01:05:48.546432 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546436 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546439 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546443 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546449 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546453 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546457 | orchestrator | 2026-04-10 01:05:48.546461 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-10 01:05:48.546467 | orchestrator | Friday 10 April 2026 01:02:56 +0000 (0:00:02.159) 0:01:26.358 ********** 2026-04-10 01:05:48.546471 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546474 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546560 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546565 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546569 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546573 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546576 | orchestrator | 2026-04-10 01:05:48.546580 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-10 01:05:48.546584 | orchestrator | Friday 10 April 2026 01:02:59 +0000 (0:00:02.820) 0:01:29.179 ********** 2026-04-10 01:05:48.546588 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-10 01:05:48.546592 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546596 
| orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-10 01:05:48.546599 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546603 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-10 01:05:48.546607 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546611 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-10 01:05:48.546615 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546618 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-10 01:05:48.546622 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546626 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-10 01:05:48.546630 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546637 | orchestrator | 2026-04-10 01:05:48.546641 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-10 01:05:48.546645 | orchestrator | Friday 10 April 2026 01:03:01 +0000 (0:00:02.815) 0:01:31.995 ********** 2026-04-10 01:05:48.546649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.546653 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.546661 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.546677 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.546685 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.546697 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546701 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.546705 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546709 | orchestrator | 2026-04-10 01:05:48.546713 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-10 01:05:48.546716 | orchestrator | Friday 10 April 2026 01:03:03 +0000 (0:00:01.979) 0:01:33.974 ********** 2026-04-10 01:05:48.546720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.546724 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.546737 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.546748 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546752 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.546756 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.546764 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.546772 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546776 | orchestrator | 2026-04-10 01:05:48.546779 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-10 01:05:48.546783 | orchestrator | Friday 10 April 2026 01:03:05 +0000 (0:00:01.920) 0:01:35.895 ********** 2026-04-10 01:05:48.546787 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546793 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546797 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546800 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546805 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546812 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546820 | orchestrator | 2026-04-10 01:05:48.546831 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-10 01:05:48.546842 | orchestrator | Friday 10 April 2026 01:03:08 +0000 (0:00:02.488) 0:01:38.383 ********** 2026-04-10 01:05:48.546848 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546854 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546860 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546866 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:05:48.546871 | orchestrator | changed: 
[testbed-node-3] 2026-04-10 01:05:48.546877 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:05:48.546883 | orchestrator | 2026-04-10 01:05:48.546890 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-10 01:05:48.546896 | orchestrator | Friday 10 April 2026 01:03:13 +0000 (0:00:04.934) 0:01:43.317 ********** 2026-04-10 01:05:48.546903 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546909 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546916 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546921 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546925 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546929 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546933 | orchestrator | 2026-04-10 01:05:48.546936 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-10 01:05:48.546940 | orchestrator | Friday 10 April 2026 01:03:16 +0000 (0:00:03.045) 0:01:46.363 ********** 2026-04-10 01:05:48.546944 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546948 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546951 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546955 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.546959 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546963 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.546966 | orchestrator | 2026-04-10 01:05:48.546970 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-10 01:05:48.546974 | orchestrator | Friday 10 April 2026 01:03:18 +0000 (0:00:02.323) 0:01:48.686 ********** 2026-04-10 01:05:48.546977 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.546981 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.546985 | orchestrator | skipping: 
[testbed-node-4] 2026-04-10 01:05:48.546989 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.546992 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.546996 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.547000 | orchestrator | 2026-04-10 01:05:48.547003 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-10 01:05:48.547007 | orchestrator | Friday 10 April 2026 01:03:21 +0000 (0:00:03.091) 0:01:51.778 ********** 2026-04-10 01:05:48.547011 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.547015 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.547019 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.547022 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.547026 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.547030 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.547033 | orchestrator | 2026-04-10 01:05:48.547037 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-10 01:05:48.547041 | orchestrator | Friday 10 April 2026 01:03:24 +0000 (0:00:03.287) 0:01:55.065 ********** 2026-04-10 01:05:48.547045 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.547048 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.547052 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.547056 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.547062 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.547068 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.547074 | orchestrator | 2026-04-10 01:05:48.547080 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-10 01:05:48.547086 | orchestrator | Friday 10 April 2026 01:03:27 +0000 (0:00:02.392) 0:01:57.458 ********** 2026-04-10 01:05:48.547092 | orchestrator | skipping: 
[testbed-node-2] 2026-04-10 01:05:48.547103 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.547109 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.547115 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.547122 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.547128 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.547135 | orchestrator | 2026-04-10 01:05:48.547142 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-10 01:05:48.547148 | orchestrator | Friday 10 April 2026 01:03:30 +0000 (0:00:02.840) 0:02:00.298 ********** 2026-04-10 01:05:48.547154 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.547160 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.547164 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.547168 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.547172 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.547175 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.547179 | orchestrator | 2026-04-10 01:05:48.547183 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-10 01:05:48.547187 | orchestrator | Friday 10 April 2026 01:03:33 +0000 (0:00:03.499) 0:02:03.798 ********** 2026-04-10 01:05:48.547191 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-10 01:05:48.547195 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.547200 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-10 01:05:48.547205 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.547210 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-10 01:05:48.547214 | orchestrator | skipping: 
[testbed-node-5] 2026-04-10 01:05:48.547218 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-10 01:05:48.547223 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.547231 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-10 01:05:48.547236 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.547240 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-10 01:05:48.547248 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.547252 | orchestrator | 2026-04-10 01:05:48.547257 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-10 01:05:48.547262 | orchestrator | Friday 10 April 2026 01:03:36 +0000 (0:00:02.424) 0:02:06.222 ********** 2026-04-10 01:05:48.547266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.547271 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.547276 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.547285 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.547290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.547295 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.547299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.547304 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.547315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-10 01:05:48.547319 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.547323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-10 01:05:48.547327 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.547333 | orchestrator | 2026-04-10 01:05:48.547337 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-10 01:05:48.547341 | orchestrator | Friday 10 April 2026 01:03:39 +0000 (0:00:03.255) 0:02:09.478 ********** 2026-04-10 01:05:48.547345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.547349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.547356 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.547362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.547366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-10 01:05:48.547372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-10 01:05:48.547376 | orchestrator | 2026-04-10 01:05:48.547380 | 
orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-10 01:05:48.547384 | orchestrator | Friday 10 April 2026 01:03:42 +0000 (0:00:03.133) 0:02:12.611 ********** 2026-04-10 01:05:48.547388 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:48.547391 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:48.547395 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:48.547399 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:05:48.547403 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:05:48.547407 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:05:48.547410 | orchestrator | 2026-04-10 01:05:48.547414 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-10 01:05:48.547418 | orchestrator | Friday 10 April 2026 01:03:43 +0000 (0:00:00.649) 0:02:13.261 ********** 2026-04-10 01:05:48.547422 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:48.547425 | orchestrator | 2026-04-10 01:05:48.547429 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-10 01:05:48.547433 | orchestrator | Friday 10 April 2026 01:03:45 +0000 (0:00:02.508) 0:02:15.771 ********** 2026-04-10 01:05:48.547437 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:48.547440 | orchestrator | 2026-04-10 01:05:48.547444 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-10 01:05:48.547448 | orchestrator | Friday 10 April 2026 01:03:48 +0000 (0:00:02.698) 0:02:18.470 ********** 2026-04-10 01:05:48.547452 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:48.547455 | orchestrator | 2026-04-10 01:05:48.547459 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-10 01:05:48.547463 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:39.665) 0:02:58.136 ********** 
2026-04-10 01:05:48.547467 | orchestrator | 2026-04-10 01:05:48.547470 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-10 01:05:48.547474 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:00.088) 0:02:58.224 ********** 2026-04-10 01:05:48.547478 | orchestrator | 2026-04-10 01:05:48.547481 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-10 01:05:48.547485 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:00.076) 0:02:58.300 ********** 2026-04-10 01:05:48.547489 | orchestrator | 2026-04-10 01:05:48.547493 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-10 01:05:48.547526 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:00.073) 0:02:58.374 ********** 2026-04-10 01:05:48.547530 | orchestrator | 2026-04-10 01:05:48.547537 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-10 01:05:48.547541 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:00.080) 0:02:58.454 ********** 2026-04-10 01:05:48.547548 | orchestrator | 2026-04-10 01:05:48.547556 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-10 01:05:48.547563 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:00.090) 0:02:58.544 ********** 2026-04-10 01:05:48.547572 | orchestrator | 2026-04-10 01:05:48.547579 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-10 01:05:48.547585 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:00.069) 0:02:58.614 ********** 2026-04-10 01:05:48.547590 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:48.547596 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:48.547602 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:48.547608 | orchestrator | 2026-04-10 01:05:48.547614 | 
orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-10 01:05:48.547619 | orchestrator | Friday 10 April 2026 01:04:59 +0000 (0:00:30.804) 0:03:29.419 ********** 2026-04-10 01:05:48.547625 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:05:48.547630 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:05:48.547637 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:05:48.547643 | orchestrator | 2026-04-10 01:05:48.547649 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:05:48.547656 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-10 01:05:48.547663 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-10 01:05:48.547670 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-10 01:05:48.547677 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-10 01:05:48.547684 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-10 01:05:48.547690 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-10 01:05:48.547696 | orchestrator | 2026-04-10 01:05:48.547702 | orchestrator | 2026-04-10 01:05:48.547709 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:05:48.547716 | orchestrator | Friday 10 April 2026 01:05:45 +0000 (0:00:45.891) 0:04:15.311 ********** 2026-04-10 01:05:48.547722 | orchestrator | =============================================================================== 2026-04-10 01:05:48.547729 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 45.89s 2026-04-10 
01:05:48.547788 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.67s 2026-04-10 01:05:48.547793 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.81s 2026-04-10 01:05:48.547796 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.07s 2026-04-10 01:05:48.547800 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.22s 2026-04-10 01:05:48.547804 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.13s 2026-04-10 01:05:48.547808 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.93s 2026-04-10 01:05:48.547812 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.56s 2026-04-10 01:05:48.547815 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.93s 2026-04-10 01:05:48.547819 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.68s 2026-04-10 01:05:48.547823 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.67s 2026-04-10 01:05:48.547827 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.50s 2026-04-10 01:05:48.547836 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.29s 2026-04-10 01:05:48.547839 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.26s 2026-04-10 01:05:48.547843 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.13s 2026-04-10 01:05:48.547847 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.12s 2026-04-10 01:05:48.547851 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.09s 2026-04-10 01:05:48.547855 
| orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.05s 2026-04-10 01:05:48.547858 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.04s 2026-04-10 01:05:48.547862 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 2.84s 2026-04-10 01:05:48.547866 | orchestrator | 2026-04-10 01:05:48 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:05:48.547871 | orchestrator | 2026-04-10 01:05:48 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:48.547877 | orchestrator | 2026-04-10 01:05:48 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:05:48.549039 | orchestrator | 2026-04-10 01:05:48 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state STARTED 2026-04-10 01:05:48.549321 | orchestrator | 2026-04-10 01:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:51.621100 | orchestrator | 2026-04-10 01:05:51 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:05:51.621681 | orchestrator | 2026-04-10 01:05:51 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:51.622522 | orchestrator | 2026-04-10 01:05:51 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:05:51.623527 | orchestrator | 2026-04-10 01:05:51 | INFO  | Task 0515b70b-3e4f-4ba6-b94f-5ede9b4121c9 is in state SUCCESS 2026-04-10 01:05:51.624902 | orchestrator | 2026-04-10 01:05:51.624923 | orchestrator | 2026-04-10 01:05:51.624928 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:05:51.624933 | orchestrator | 2026-04-10 01:05:51.624937 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:05:51.624942 | orchestrator | Friday 10 April 2026 
01:04:45 +0000 (0:00:00.316) 0:00:00.316 ********** 2026-04-10 01:05:51.624946 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:05:51.624951 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:05:51.624955 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:05:51.624959 | orchestrator | 2026-04-10 01:05:51.624963 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:05:51.624968 | orchestrator | Friday 10 April 2026 01:04:45 +0000 (0:00:00.262) 0:00:00.579 ********** 2026-04-10 01:05:51.624972 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-10 01:05:51.624976 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-10 01:05:51.624980 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-10 01:05:51.624985 | orchestrator | 2026-04-10 01:05:51.624989 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-10 01:05:51.624993 | orchestrator | 2026-04-10 01:05:51.624997 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-10 01:05:51.625001 | orchestrator | Friday 10 April 2026 01:04:46 +0000 (0:00:00.312) 0:00:00.892 ********** 2026-04-10 01:05:51.625005 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:05:51.625010 | orchestrator | 2026-04-10 01:05:51.625014 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-10 01:05:51.625018 | orchestrator | Friday 10 April 2026 01:04:46 +0000 (0:00:00.676) 0:00:01.569 ********** 2026-04-10 01:05:51.625034 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-10 01:05:51.625039 | orchestrator | 2026-04-10 01:05:51.625043 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-10 
01:05:51.625047 | orchestrator | Friday 10 April 2026 01:04:50 +0000 (0:00:04.254) 0:00:05.824 ********** 2026-04-10 01:05:51.625051 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-10 01:05:51.625056 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-10 01:05:51.625060 | orchestrator | 2026-04-10 01:05:51.625064 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-10 01:05:51.625068 | orchestrator | Friday 10 April 2026 01:04:58 +0000 (0:00:07.158) 0:00:12.983 ********** 2026-04-10 01:05:51.625072 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 01:05:51.625076 | orchestrator | 2026-04-10 01:05:51.625081 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-10 01:05:51.625085 | orchestrator | Friday 10 April 2026 01:05:01 +0000 (0:00:03.551) 0:00:16.534 ********** 2026-04-10 01:05:51.625089 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-10 01:05:51.625093 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:05:51.625097 | orchestrator | 2026-04-10 01:05:51.625101 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-10 01:05:51.625105 | orchestrator | Friday 10 April 2026 01:05:05 +0000 (0:00:03.636) 0:00:20.171 ********** 2026-04-10 01:05:51.625109 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:05:51.625113 | orchestrator | 2026-04-10 01:05:51.625117 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-10 01:05:51.625121 | orchestrator | Friday 10 April 2026 01:05:08 +0000 (0:00:03.124) 0:00:23.296 ********** 2026-04-10 01:05:51.625125 | orchestrator | changed: [testbed-node-0] => (item=placement -> service 
-> admin) 2026-04-10 01:05:51.625129 | orchestrator | 2026-04-10 01:05:51.625133 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-10 01:05:51.625137 | orchestrator | Friday 10 April 2026 01:05:11 +0000 (0:00:03.371) 0:00:26.667 ********** 2026-04-10 01:05:51.625142 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:51.625146 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:51.625150 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:51.625154 | orchestrator | 2026-04-10 01:05:51.625158 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-10 01:05:51.625162 | orchestrator | Friday 10 April 2026 01:05:12 +0000 (0:00:00.325) 0:00:26.992 ********** 2026-04-10 01:05:51.625175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625202 | orchestrator | 2026-04-10 01:05:51.625206 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-10 01:05:51.625210 | orchestrator | Friday 10 April 2026 01:05:14 +0000 (0:00:02.092) 0:00:29.085 ********** 2026-04-10 01:05:51.625214 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:51.625218 | 
orchestrator | 2026-04-10 01:05:51.625222 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-10 01:05:51.625226 | orchestrator | Friday 10 April 2026 01:05:14 +0000 (0:00:00.172) 0:00:29.258 ********** 2026-04-10 01:05:51.625230 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:51.625235 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:51.625239 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:51.625251 | orchestrator | 2026-04-10 01:05:51.625255 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-10 01:05:51.625259 | orchestrator | Friday 10 April 2026 01:05:14 +0000 (0:00:00.229) 0:00:29.487 ********** 2026-04-10 01:05:51.625264 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:05:51.625268 | orchestrator | 2026-04-10 01:05:51.625277 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-10 01:05:51.625281 | orchestrator | Friday 10 April 2026 01:05:15 +0000 (0:00:00.543) 0:00:30.031 ********** 2026-04-10 01:05:51.625285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-04-10 01:05:51.625310 | orchestrator | 2026-04-10 01:05:51.625317 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-10 01:05:51.625325 | orchestrator | Friday 10 April 2026 01:05:16 +0000 (0:00:01.314) 0:00:31.345 ********** 2026-04-10 01:05:51.625335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625342 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:51.625349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625356 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:51.625375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625387 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:51.625393 | orchestrator | 2026-04-10 01:05:51.625399 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-10 01:05:51.625406 | orchestrator | Friday 10 April 2026 01:05:16 +0000 (0:00:00.440) 0:00:31.786 ********** 2026-04-10 01:05:51.625413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625419 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:51.625426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625433 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:51.625441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625447 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:51.625454 | orchestrator | 2026-04-10 01:05:51.625461 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-10 01:05:51.625472 | orchestrator | Friday 10 April 2026 01:05:18 +0000 (0:00:01.203) 0:00:32.990 ********** 2026-04-10 01:05:51.625487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625524 | orchestrator | 2026-04-10 01:05:51.625531 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-10 01:05:51.625538 | orchestrator | Friday 10 April 2026 01:05:19 +0000 (0:00:01.822) 
0:00:34.813 ********** 2026-04-10 01:05:51.625545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625582 | orchestrator | 2026-04-10 01:05:51.625586 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-10 01:05:51.625591 | orchestrator | Friday 10 April 2026 01:05:22 +0000 (0:00:02.361) 0:00:37.174 ********** 2026-04-10 01:05:51.625595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-10 01:05:51.625599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-10 01:05:51.625603 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-10 01:05:51.625607 | orchestrator | 2026-04-10 01:05:51.625611 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-10 01:05:51.625616 | orchestrator | Friday 10 April 2026 01:05:23 +0000 (0:00:01.589) 0:00:38.763 ********** 2026-04-10 01:05:51.625620 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:51.625624 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:51.625628 | orchestrator 
| changed: [testbed-node-2] 2026-04-10 01:05:51.625632 | orchestrator | 2026-04-10 01:05:51.625636 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-10 01:05:51.625640 | orchestrator | Friday 10 April 2026 01:05:25 +0000 (0:00:01.512) 0:00:40.276 ********** 2026-04-10 01:05:51.625645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625652 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:05:51.625656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625660 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:05:51.625670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-10 01:05:51.625674 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:05:51.625678 | orchestrator | 2026-04-10 01:05:51.625683 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-10 01:05:51.625687 | orchestrator | Friday 10 April 2026 01:05:26 +0000 (0:00:01.001) 0:00:41.278 ********** 2026-04-10 01:05:51.625691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-10 01:05:51.625707 | orchestrator | 2026-04-10 01:05:51.625711 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-10 01:05:51.625715 | orchestrator | Friday 10 April 2026 01:05:27 +0000 (0:00:01.053) 0:00:42.331 ********** 2026-04-10 01:05:51.625719 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:51.625723 | orchestrator | 2026-04-10 01:05:51.625727 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-10 01:05:51.625731 | orchestrator | Friday 10 April 2026 01:05:29 +0000 (0:00:01.989) 0:00:44.321 ********** 2026-04-10 01:05:51.625735 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:51.625739 | orchestrator | 2026-04-10 01:05:51.625743 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-10 01:05:51.625748 | orchestrator | Friday 10 April 2026 01:05:31 +0000 (0:00:01.937) 0:00:46.258 ********** 2026-04-10 01:05:51.625752 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:51.625756 | orchestrator | 2026-04-10 01:05:51.625760 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-10 01:05:51.625764 | orchestrator | Friday 10 April 2026 01:05:43 +0000 (0:00:11.865) 0:00:58.124 ********** 2026-04-10 01:05:51.625768 | orchestrator | 2026-04-10 01:05:51.625772 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-10 01:05:51.625776 | 
orchestrator | Friday 10 April 2026 01:05:43 +0000 (0:00:00.062) 0:00:58.186 ********** 2026-04-10 01:05:51.625781 | orchestrator | 2026-04-10 01:05:51.625787 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-10 01:05:51.625791 | orchestrator | Friday 10 April 2026 01:05:43 +0000 (0:00:00.059) 0:00:58.245 ********** 2026-04-10 01:05:51.625795 | orchestrator | 2026-04-10 01:05:51.625799 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-10 01:05:51.625804 | orchestrator | Friday 10 April 2026 01:05:43 +0000 (0:00:00.060) 0:00:58.306 ********** 2026-04-10 01:05:51.625808 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:05:51.625812 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:05:51.625816 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:05:51.625820 | orchestrator | 2026-04-10 01:05:51.625824 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:05:51.625829 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-10 01:05:51.625834 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-10 01:05:51.625838 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-10 01:05:51.625842 | orchestrator | 2026-04-10 01:05:51.625846 | orchestrator | 2026-04-10 01:05:51.625850 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:05:51.625858 | orchestrator | Friday 10 April 2026 01:05:50 +0000 (0:00:07.375) 0:01:05.681 ********** 2026-04-10 01:05:51.625862 | orchestrator | =============================================================================== 2026-04-10 01:05:51.625866 | orchestrator | placement : Running placement bootstrap container 
---------------------- 11.87s 2026-04-10 01:05:51.625871 | orchestrator | placement : Restart placement-api container ----------------------------- 7.38s 2026-04-10 01:05:51.625875 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.16s 2026-04-10 01:05:51.625879 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.26s 2026-04-10 01:05:51.625883 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.64s 2026-04-10 01:05:51.625887 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.55s 2026-04-10 01:05:51.625891 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.37s 2026-04-10 01:05:51.625895 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.12s 2026-04-10 01:05:51.625899 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.36s 2026-04-10 01:05:51.625903 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.09s 2026-04-10 01:05:51.625908 | orchestrator | placement : Creating placement databases -------------------------------- 1.99s 2026-04-10 01:05:51.625912 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.94s 2026-04-10 01:05:51.625916 | orchestrator | placement : Copying over config.json files for services ----------------- 1.82s 2026-04-10 01:05:51.625920 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.59s 2026-04-10 01:05:51.625924 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s 2026-04-10 01:05:51.625928 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.31s 2026-04-10 01:05:51.625932 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS 
key --- 1.20s 2026-04-10 01:05:51.625962 | orchestrator | placement : Check placement containers ---------------------------------- 1.05s 2026-04-10 01:05:51.625970 | orchestrator | placement : Copying over existing policy file --------------------------- 1.00s 2026-04-10 01:05:51.625978 | orchestrator | placement : include_tasks ----------------------------------------------- 0.68s 2026-04-10 01:05:51.625984 | orchestrator | 2026-04-10 01:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:54.661358 | orchestrator | 2026-04-10 01:05:54 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:05:54.662326 | orchestrator | 2026-04-10 01:05:54 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:54.662386 | orchestrator | 2026-04-10 01:05:54 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:05:54.663664 | orchestrator | 2026-04-10 01:05:54 | INFO  | Task 638722db-5ccf-4444-961a-e52490338e6c is in state STARTED 2026-04-10 01:05:54.663703 | orchestrator | 2026-04-10 01:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:05:57.700816 | orchestrator | 2026-04-10 01:05:57 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:05:57.702348 | orchestrator | 2026-04-10 01:05:57 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:05:57.703210 | orchestrator | 2026-04-10 01:05:57 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:05:57.703939 | orchestrator | 2026-04-10 01:05:57 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:05:57.704406 | orchestrator | 2026-04-10 01:05:57 | INFO  | Task 638722db-5ccf-4444-961a-e52490338e6c is in state SUCCESS 2026-04-10 01:05:57.704568 | orchestrator | 2026-04-10 01:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:00.734108 | orchestrator | 
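The "Task … is in state STARTED / Wait 1 second(s) until the next check" records above come from a simple polling loop: a set of task IDs is checked repeatedly until each reaches a terminal state, with a fixed sleep between rounds. A minimal sketch of that pattern (names and the `get_state` callback are assumptions for illustration, not the actual OSISM client API):

```python
import time

# States after which a task no longer needs to be polled.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task until it reaches a terminal state.

    get_state(task_id) -> str is a caller-supplied lookup (assumed here);
    log() receives lines shaped like the ones in the console output above.
    """
    pending = set(task_ids)
    while pending:
        # Iterate over a sorted snapshot so we can discard while looping.
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Tasks that finish early (like `638722db-…` reaching SUCCESS above) simply drop out of the set while the remaining ones keep being re-checked each round.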
2026-04-10 01:06:00 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:00.734822 | orchestrator | 2026-04-10 01:06:00 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:00.736095 | orchestrator | 2026-04-10 01:06:00 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:00.739424 | orchestrator | 2026-04-10 01:06:00 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:00.739465 | orchestrator | 2026-04-10 01:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:03.768060 | orchestrator | 2026-04-10 01:06:03 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:03.768433 | orchestrator | 2026-04-10 01:06:03 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:03.769201 | orchestrator | 2026-04-10 01:06:03 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:03.771449 | orchestrator | 2026-04-10 01:06:03 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:03.771514 | orchestrator | 2026-04-10 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:06.797571 | orchestrator | 2026-04-10 01:06:06 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:06.798451 | orchestrator | 2026-04-10 01:06:06 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:06.799575 | orchestrator | 2026-04-10 01:06:06 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:06.799607 | orchestrator | 2026-04-10 01:06:06 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:06.799614 | orchestrator | 2026-04-10 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:09.829082 | orchestrator | 2026-04-10 01:06:09 | INFO  | 
Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:09.829445 | orchestrator | 2026-04-10 01:06:09 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:09.830243 | orchestrator | 2026-04-10 01:06:09 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:09.830913 | orchestrator | 2026-04-10 01:06:09 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:09.831513 | orchestrator | 2026-04-10 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:12.865872 | orchestrator | 2026-04-10 01:06:12 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:12.866928 | orchestrator | 2026-04-10 01:06:12 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:12.869771 | orchestrator | 2026-04-10 01:06:12 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:12.870922 | orchestrator | 2026-04-10 01:06:12 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:12.870959 | orchestrator | 2026-04-10 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:15.907048 | orchestrator | 2026-04-10 01:06:15 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:15.909369 | orchestrator | 2026-04-10 01:06:15 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:15.911714 | orchestrator | 2026-04-10 01:06:15 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:15.913722 | orchestrator | 2026-04-10 01:06:15 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:15.913764 | orchestrator | 2026-04-10 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:18.952874 | orchestrator | 2026-04-10 01:06:18 | INFO  | Task 
df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:18.955957 | orchestrator | 2026-04-10 01:06:18 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:18.957746 | orchestrator | 2026-04-10 01:06:18 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:18.959275 | orchestrator | 2026-04-10 01:06:18 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:18.959319 | orchestrator | 2026-04-10 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:22.003711 | orchestrator | 2026-04-10 01:06:22 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:22.005495 | orchestrator | 2026-04-10 01:06:22 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:22.007333 | orchestrator | 2026-04-10 01:06:22 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:22.009133 | orchestrator | 2026-04-10 01:06:22 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:22.009159 | orchestrator | 2026-04-10 01:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:25.046607 | orchestrator | 2026-04-10 01:06:25 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:25.047112 | orchestrator | 2026-04-10 01:06:25 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:25.047895 | orchestrator | 2026-04-10 01:06:25 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:25.048699 | orchestrator | 2026-04-10 01:06:25 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:25.048723 | orchestrator | 2026-04-10 01:06:25 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:28.092103 | orchestrator | 2026-04-10 01:06:28 | INFO  | Task 
df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:28.092894 | orchestrator | 2026-04-10 01:06:28 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:28.093560 | orchestrator | 2026-04-10 01:06:28 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:28.094554 | orchestrator | 2026-04-10 01:06:28 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:28.094589 | orchestrator | 2026-04-10 01:06:28 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:31.130758 | orchestrator | 2026-04-10 01:06:31 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:31.131587 | orchestrator | 2026-04-10 01:06:31 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:31.132263 | orchestrator | 2026-04-10 01:06:31 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:31.133219 | orchestrator | 2026-04-10 01:06:31 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:31.133241 | orchestrator | 2026-04-10 01:06:31 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:34.164680 | orchestrator | 2026-04-10 01:06:34 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:34.164723 | orchestrator | 2026-04-10 01:06:34 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:34.165652 | orchestrator | 2026-04-10 01:06:34 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:34.168214 | orchestrator | 2026-04-10 01:06:34 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:34.168267 | orchestrator | 2026-04-10 01:06:34 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:37.210609 | orchestrator | 2026-04-10 01:06:37 | INFO  | Task 
df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:37.212394 | orchestrator | 2026-04-10 01:06:37 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:37.214279 | orchestrator | 2026-04-10 01:06:37 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:37.216022 | orchestrator | 2026-04-10 01:06:37 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:37.216067 | orchestrator | 2026-04-10 01:06:37 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:40.254383 | orchestrator | 2026-04-10 01:06:40 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:40.254431 | orchestrator | 2026-04-10 01:06:40 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:40.255366 | orchestrator | 2026-04-10 01:06:40 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:40.256255 | orchestrator | 2026-04-10 01:06:40 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:40.256286 | orchestrator | 2026-04-10 01:06:40 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:43.307192 | orchestrator | 2026-04-10 01:06:43 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:43.308323 | orchestrator | 2026-04-10 01:06:43 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:43.309395 | orchestrator | 2026-04-10 01:06:43 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:43.310534 | orchestrator | 2026-04-10 01:06:43 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:43.310569 | orchestrator | 2026-04-10 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:46.367575 | orchestrator | 2026-04-10 01:06:46 | INFO  | Task 
df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:46.369872 | orchestrator | 2026-04-10 01:06:46 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:46.373267 | orchestrator | 2026-04-10 01:06:46 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:46.376262 | orchestrator | 2026-04-10 01:06:46 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:46.377464 | orchestrator | 2026-04-10 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:49.417022 | orchestrator | 2026-04-10 01:06:49 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:49.420740 | orchestrator | 2026-04-10 01:06:49 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:06:49.423913 | orchestrator | 2026-04-10 01:06:49 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state STARTED 2026-04-10 01:06:49.428284 | orchestrator | 2026-04-10 01:06:49 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state STARTED 2026-04-10 01:06:49.428330 | orchestrator | 2026-04-10 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:06:52.474909 | orchestrator | 2026-04-10 01:06:52 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:06:52.476383 | orchestrator | 2026-04-10 01:06:52 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state STARTED 2026-04-10 01:08:52.592311 | orchestrator | 2026-04-10 01:08:52.592476 | orchestrator | 2026-04-10 01:08:52.592925 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:08:52.592938 | orchestrator | 2026-04-10 01:08:52.592944 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:08:52.592950 | orchestrator | Friday 10 April 2026 01:05:54 +0000 (0:00:00.189) 0:00:00.189 
********** 2026-04-10 01:08:52.592955 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.592962 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:08:52.592967 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:08:52.592972 | orchestrator | 2026-04-10 01:08:52.592977 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:08:52.592983 | orchestrator | Friday 10 April 2026 01:05:54 +0000 (0:00:00.284) 0:00:00.474 ********** 2026-04-10 01:08:52.592988 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-10 01:08:52.592993 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-10 01:08:52.592999 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-10 01:08:52.593004 | orchestrator | 2026-04-10 01:08:52.593009 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-04-10 01:08:52.593015 | orchestrator | 2026-04-10 01:08:52.593021 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-10 01:08:52.593027 | orchestrator | Friday 10 April 2026 01:05:54 +0000 (0:00:00.432) 0:00:00.907 ********** 2026-04-10 01:08:52.593033 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:08:52.593038 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:08:52.593044 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.593050 | orchestrator | 2026-04-10 01:08:52.593056 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:08:52.593063 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 01:08:52.593070 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 01:08:52.593076 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 
01:08:52.593082 | orchestrator | 2026-04-10 01:08:52.593088 | orchestrator | 2026-04-10 01:08:52.593094 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:08:52.593100 | orchestrator | Friday 10 April 2026 01:05:56 +0000 (0:00:01.325) 0:00:02.233 ********** 2026-04-10 01:08:52.593106 | orchestrator | =============================================================================== 2026-04-10 01:08:52.593112 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 1.33s 2026-04-10 01:08:52.593118 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-04-10 01:08:52.593124 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-04-10 01:08:52.593130 | orchestrator | 2026-04-10 01:08:52.593136 | orchestrator | 2026-04-10 01:08:52 | INFO  | Task a0e0d57f-dc7b-4411-8ac2-21ffba8375e1 is in state SUCCESS 2026-04-10 01:08:52.597263 | orchestrator | 2026-04-10 01:08:52.597320 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:08:52.597332 | orchestrator | 2026-04-10 01:08:52.597337 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-10 01:08:52.597341 | orchestrator | Friday 10 April 2026 00:59:30 +0000 (0:00:00.307) 0:00:00.307 ********** 2026-04-10 01:08:52.597345 | orchestrator | changed: [testbed-manager] 2026-04-10 01:08:52.597350 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.597353 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:08:52.597370 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:08:52.597374 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:08:52.597378 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:08:52.597382 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:08:52.597390 | orchestrator | 2026-04-10 
01:08:52.597394 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:08:52.597444 | orchestrator | Friday 10 April 2026 00:59:30 +0000 (0:00:00.614) 0:00:00.921 ********** 2026-04-10 01:08:52.597449 | orchestrator | changed: [testbed-manager] 2026-04-10 01:08:52.597452 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.597456 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:08:52.597460 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:08:52.597467 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:08:52.597473 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:08:52.597483 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:08:52.597490 | orchestrator | 2026-04-10 01:08:52.597496 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:08:52.597502 | orchestrator | Friday 10 April 2026 00:59:31 +0000 (0:00:00.791) 0:00:01.712 ********** 2026-04-10 01:08:52.597508 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-10 01:08:52.597577 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-10 01:08:52.597583 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-10 01:08:52.597587 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-10 01:08:52.597591 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-10 01:08:52.597596 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-10 01:08:52.597602 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-10 01:08:52.597609 | orchestrator | 2026-04-10 01:08:52.597615 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-10 01:08:52.597622 | orchestrator | 2026-04-10 01:08:52.597628 | orchestrator | TASK [Bootstrap deploy] 
******************************************************** 2026-04-10 01:08:52.597635 | orchestrator | Friday 10 April 2026 00:59:32 +0000 (0:00:00.600) 0:00:02.313 ********** 2026-04-10 01:08:52.597641 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.597647 | orchestrator | 2026-04-10 01:08:52.597652 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-10 01:08:52.597659 | orchestrator | Friday 10 April 2026 00:59:32 +0000 (0:00:00.612) 0:00:02.925 ********** 2026-04-10 01:08:52.597666 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-10 01:08:52.597673 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-10 01:08:52.597678 | orchestrator | 2026-04-10 01:08:52.597684 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-10 01:08:52.597690 | orchestrator | Friday 10 April 2026 00:59:37 +0000 (0:00:04.986) 0:00:07.912 ********** 2026-04-10 01:08:52.597695 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-10 01:08:52.597701 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-10 01:08:52.597707 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.597720 | orchestrator | 2026-04-10 01:08:52.597726 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-10 01:08:52.597733 | orchestrator | Friday 10 April 2026 00:59:42 +0000 (0:00:05.025) 0:00:12.937 ********** 2026-04-10 01:08:52.597739 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.597746 | orchestrator | 2026-04-10 01:08:52.597752 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-10 01:08:52.597759 | orchestrator | Friday 10 April 2026 00:59:44 +0000 (0:00:01.235) 0:00:14.173 ********** 2026-04-10 01:08:52.597764 | orchestrator | changed: [testbed-node-0] 2026-04-10 
01:08:52.597768 | orchestrator | 2026-04-10 01:08:52.597772 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-10 01:08:52.597782 | orchestrator | Friday 10 April 2026 00:59:45 +0000 (0:00:01.563) 0:00:15.736 ********** 2026-04-10 01:08:52.597786 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.597790 | orchestrator | 2026-04-10 01:08:52.597794 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-10 01:08:52.597797 | orchestrator | Friday 10 April 2026 00:59:49 +0000 (0:00:03.286) 0:00:19.023 ********** 2026-04-10 01:08:52.597801 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.597805 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.597809 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.597812 | orchestrator | 2026-04-10 01:08:52.597816 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-10 01:08:52.597820 | orchestrator | Friday 10 April 2026 00:59:49 +0000 (0:00:00.759) 0:00:19.782 ********** 2026-04-10 01:08:52.597823 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.597827 | orchestrator | 2026-04-10 01:08:52.597831 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-10 01:08:52.597835 | orchestrator | Friday 10 April 2026 01:00:26 +0000 (0:00:36.868) 0:00:56.650 ********** 2026-04-10 01:08:52.597839 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.597842 | orchestrator | 2026-04-10 01:08:52.597846 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-10 01:08:52.597850 | orchestrator | Friday 10 April 2026 01:00:44 +0000 (0:00:17.518) 0:01:14.168 ********** 2026-04-10 01:08:52.597855 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.597859 | orchestrator | 2026-04-10 01:08:52.597864 | orchestrator | TASK 
[nova-cell : Extract current cell settings from list] ********************* 2026-04-10 01:08:52.597870 | orchestrator | Friday 10 April 2026 01:00:57 +0000 (0:00:13.596) 0:01:27.765 ********** 2026-04-10 01:08:52.597892 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.597899 | orchestrator | 2026-04-10 01:08:52.597905 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-10 01:08:52.597912 | orchestrator | Friday 10 April 2026 01:00:58 +0000 (0:00:00.592) 0:01:28.358 ********** 2026-04-10 01:08:52.597919 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.597926 | orchestrator | 2026-04-10 01:08:52.597933 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-10 01:08:52.597939 | orchestrator | Friday 10 April 2026 01:00:58 +0000 (0:00:00.390) 0:01:28.749 ********** 2026-04-10 01:08:52.597946 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.597951 | orchestrator | 2026-04-10 01:08:52.597956 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-10 01:08:52.597960 | orchestrator | Friday 10 April 2026 01:00:59 +0000 (0:00:00.618) 0:01:29.367 ********** 2026-04-10 01:08:52.597964 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.597969 | orchestrator | 2026-04-10 01:08:52.597973 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-10 01:08:52.597977 | orchestrator | Friday 10 April 2026 01:01:20 +0000 (0:00:21.132) 0:01:50.499 ********** 2026-04-10 01:08:52.597982 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.597986 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.597991 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.597995 | orchestrator | 2026-04-10 01:08:52.597999 | orchestrator | PLAY 
[Bootstrap nova cell databases] ******************************************* 2026-04-10 01:08:52.598004 | orchestrator | 2026-04-10 01:08:52.598008 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-10 01:08:52.598034 | orchestrator | Friday 10 April 2026 01:01:20 +0000 (0:00:00.325) 0:01:50.825 ********** 2026-04-10 01:08:52.598041 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.598045 | orchestrator | 2026-04-10 01:08:52.598050 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-10 01:08:52.598054 | orchestrator | Friday 10 April 2026 01:01:21 +0000 (0:00:00.884) 0:01:51.709 ********** 2026-04-10 01:08:52.598062 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598067 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598071 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.598076 | orchestrator | 2026-04-10 01:08:52.598080 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-10 01:08:52.598085 | orchestrator | Friday 10 April 2026 01:01:24 +0000 (0:00:02.566) 0:01:54.276 ********** 2026-04-10 01:08:52.598089 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598093 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598098 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.598103 | orchestrator | 2026-04-10 01:08:52.598107 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-10 01:08:52.598111 | orchestrator | Friday 10 April 2026 01:01:27 +0000 (0:00:02.728) 0:01:57.006 ********** 2026-04-10 01:08:52.598116 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598120 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598124 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598129 
| orchestrator | 2026-04-10 01:08:52.598133 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-10 01:08:52.598138 | orchestrator | Friday 10 April 2026 01:01:27 +0000 (0:00:00.594) 0:01:57.600 ********** 2026-04-10 01:08:52.598142 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-10 01:08:52.598147 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598151 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-10 01:08:52.598155 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598160 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-10 01:08:52.598164 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-10 01:08:52.598169 | orchestrator | 2026-04-10 01:08:52.598174 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-10 01:08:52.598178 | orchestrator | Friday 10 April 2026 01:01:37 +0000 (0:00:09.513) 0:02:07.114 ********** 2026-04-10 01:08:52.598183 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598187 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598191 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598195 | orchestrator | 2026-04-10 01:08:52.598200 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-10 01:08:52.598204 | orchestrator | Friday 10 April 2026 01:01:37 +0000 (0:00:00.371) 0:02:07.485 ********** 2026-04-10 01:08:52.598208 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-10 01:08:52.598213 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598217 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-10 01:08:52.598221 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598225 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-10 01:08:52.598230 | orchestrator | 
skipping: [testbed-node-2] 2026-04-10 01:08:52.598235 | orchestrator | 2026-04-10 01:08:52.598239 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-10 01:08:52.598243 | orchestrator | Friday 10 April 2026 01:01:38 +0000 (0:00:01.078) 0:02:08.563 ********** 2026-04-10 01:08:52.598248 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598252 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598257 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.598261 | orchestrator | 2026-04-10 01:08:52.598266 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-04-10 01:08:52.598270 | orchestrator | Friday 10 April 2026 01:01:39 +0000 (0:00:00.619) 0:02:09.183 ********** 2026-04-10 01:08:52.598275 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598279 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598284 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.598288 | orchestrator | 2026-04-10 01:08:52.598293 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-10 01:08:52.598298 | orchestrator | Friday 10 April 2026 01:01:40 +0000 (0:00:00.943) 0:02:10.126 ********** 2026-04-10 01:08:52.598304 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598309 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598318 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.598324 | orchestrator | 2026-04-10 01:08:52.598334 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-10 01:08:52.598341 | orchestrator | Friday 10 April 2026 01:01:42 +0000 (0:00:02.147) 0:02:12.274 ********** 2026-04-10 01:08:52.598347 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598353 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598359 | orchestrator | ok: 
[testbed-node-0] 2026-04-10 01:08:52.598364 | orchestrator | 2026-04-10 01:08:52.598369 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-10 01:08:52.598374 | orchestrator | Friday 10 April 2026 01:02:02 +0000 (0:00:20.412) 0:02:32.686 ********** 2026-04-10 01:08:52.598380 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598386 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598393 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.598400 | orchestrator | 2026-04-10 01:08:52.598407 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-10 01:08:52.598413 | orchestrator | Friday 10 April 2026 01:02:15 +0000 (0:00:12.773) 0:02:45.460 ********** 2026-04-10 01:08:52.598420 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.598427 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598436 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598449 | orchestrator | 2026-04-10 01:08:52.598455 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-10 01:08:52.598462 | orchestrator | Friday 10 April 2026 01:02:16 +0000 (0:00:01.399) 0:02:46.859 ********** 2026-04-10 01:08:52.598468 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598475 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598482 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.598489 | orchestrator | 2026-04-10 01:08:52.598495 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-10 01:08:52.598502 | orchestrator | Friday 10 April 2026 01:02:30 +0000 (0:00:13.106) 0:02:59.965 ********** 2026-04-10 01:08:52.598508 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598522 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598526 | orchestrator | skipping: [testbed-node-2] 
2026-04-10 01:08:52.598530 | orchestrator | 2026-04-10 01:08:52.598534 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-10 01:08:52.598538 | orchestrator | Friday 10 April 2026 01:02:32 +0000 (0:00:02.910) 0:03:02.876 ********** 2026-04-10 01:08:52.598541 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598545 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598549 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598553 | orchestrator | 2026-04-10 01:08:52.598556 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-10 01:08:52.598560 | orchestrator | 2026-04-10 01:08:52.598564 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-10 01:08:52.598568 | orchestrator | Friday 10 April 2026 01:02:33 +0000 (0:00:00.429) 0:03:03.305 ********** 2026-04-10 01:08:52.598571 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.598576 | orchestrator | 2026-04-10 01:08:52.598579 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-10 01:08:52.598583 | orchestrator | Friday 10 April 2026 01:02:34 +0000 (0:00:00.715) 0:03:04.021 ********** 2026-04-10 01:08:52.598587 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-10 01:08:52.598591 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-10 01:08:52.598595 | orchestrator | 2026-04-10 01:08:52.598598 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-10 01:08:52.598602 | orchestrator | Friday 10 April 2026 01:02:37 +0000 (0:00:03.090) 0:03:07.112 ********** 2026-04-10 01:08:52.598613 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> 
https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-10 01:08:52.598618 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-10 01:08:52.598622 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-10 01:08:52.598626 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-10 01:08:52.598630 | orchestrator | 2026-04-10 01:08:52.598633 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-10 01:08:52.598637 | orchestrator | Friday 10 April 2026 01:02:43 +0000 (0:00:06.703) 0:03:13.815 ********** 2026-04-10 01:08:52.598641 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 01:08:52.598645 | orchestrator | 2026-04-10 01:08:52.598648 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-10 01:08:52.598652 | orchestrator | Friday 10 April 2026 01:02:47 +0000 (0:00:03.878) 0:03:17.693 ********** 2026-04-10 01:08:52.598656 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-10 01:08:52.598660 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:08:52.598663 | orchestrator | 2026-04-10 01:08:52.598667 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-10 01:08:52.598671 | orchestrator | Friday 10 April 2026 01:02:51 +0000 (0:00:04.028) 0:03:21.722 ********** 2026-04-10 01:08:52.598675 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:08:52.598679 | orchestrator | 2026-04-10 01:08:52.598682 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-10 01:08:52.598686 | orchestrator | Friday 10 April 2026 01:02:55 +0000 (0:00:04.219) 
0:03:25.941 ********** 2026-04-10 01:08:52.598690 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-10 01:08:52.598694 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-10 01:08:52.598697 | orchestrator | 2026-04-10 01:08:52.598701 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-10 01:08:52.598709 | orchestrator | Friday 10 April 2026 01:03:03 +0000 (0:00:07.520) 0:03:33.462 ********** 2026-04-10 01:08:52.598717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 01:08:52.598724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.598732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 01:08:52.598736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.598745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 01:08:52.598750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.598754 | orchestrator | 2026-04-10 01:08:52.598758 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-10 01:08:52.598764 | orchestrator | Friday 10 April 2026 01:03:05 +0000 (0:00:02.175) 0:03:35.637 ********** 2026-04-10 01:08:52.598768 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598772 | orchestrator | 2026-04-10 01:08:52.598776 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-10 01:08:52.598780 | orchestrator | Friday 10 April 2026 01:03:05 +0000 (0:00:00.124) 0:03:35.762 ********** 2026-04-10 01:08:52.598784 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598788 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598792 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598795 | orchestrator | 2026-04-10 01:08:52.598799 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-10 01:08:52.598803 | orchestrator | Friday 10 April 2026 01:03:06 +0000 (0:00:00.382) 0:03:36.145 ********** 2026-04-10 01:08:52.598807 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 01:08:52.598811 | orchestrator | 2026-04-10 01:08:52.598815 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-10 01:08:52.598818 | orchestrator | Friday 10 April 2026 01:03:07 +0000 (0:00:01.636) 0:03:37.781 ********** 2026-04-10 01:08:52.598822 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.598826 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.598830 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.598834 | 
orchestrator | 2026-04-10 01:08:52.598837 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-10 01:08:52.598841 | orchestrator | Friday 10 April 2026 01:03:08 +0000 (0:00:00.337) 0:03:38.119 ********** 2026-04-10 01:08:52.598845 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.598849 | orchestrator | 2026-04-10 01:08:52.598853 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-10 01:08:52.598856 | orchestrator | Friday 10 April 2026 01:03:09 +0000 (0:00:01.300) 0:03:39.419 ********** 2026-04-10 01:08:52.598861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 01:08:52.598869 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.598876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.598881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598896 | orchestrator |
2026-04-10 01:08:52.598900 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-04-10 01:08:52.598904 | orchestrator | Friday 10 April 2026 01:03:12 +0000 (0:00:03.412) 0:03:42.832 **********
2026-04-10 01:08:52.598908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.598915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598919 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.598924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.598931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.598939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598943 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.598947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598951 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.598955 | orchestrator |
2026-04-10 01:08:52.598959 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-04-10 01:08:52.598963 | orchestrator | Friday 10 April 2026 01:03:13 +0000 (0:00:00.607) 0:03:43.439 **********
2026-04-10 01:08:52.598967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.598971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598975 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.598982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.598989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.598993 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.598997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599005 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.599009 | orchestrator |
2026-04-10 01:08:52.599013 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-04-10 01:08:52.599017 | orchestrator | Friday 10 April 2026 01:03:15 +0000 (0:00:02.302) 0:03:45.741 **********
2026-04-10 01:08:52.599024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599058 | orchestrator |
2026-04-10 01:08:52.599062 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-04-10 01:08:52.599066 | orchestrator | Friday 10 April 2026 01:03:18 +0000 (0:00:02.580) 0:03:48.322 **********
2026-04-10 01:08:52.599070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599102 | orchestrator |
2026-04-10 01:08:52.599106 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-04-10 01:08:52.599109 | orchestrator | Friday 10 April 2026 01:03:27 +0000 (0:00:09.199) 0:03:57.521 **********
2026-04-10 01:08:52.599114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599129 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.599134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599142 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.599146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-10 01:08:52.599152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.599156 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.599160 | orchestrator |
2026-04-10 01:08:52.599164 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-04-10 01:08:52.599168 | orchestrator | Friday 10 April 2026 01:03:28 +0000 (0:00:01.046) 0:03:58.568 **********
2026-04-10 01:08:52.599172 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.599176 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:08:52.599180 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:08:52.599183 | orchestrator |
2026-04-10 01:08:52.599189 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-04-10 01:08:52.599193 | orchestrator | Friday 10 April 2026 01:03:31 +0000 (0:00:02.783) 0:04:01.352 **********
2026-04-10 01:08:52.599197 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.599201 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.599205 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.599209 | orchestrator |
2026-04-10 01:08:52.599213 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-04-10 01:08:52.599216 | orchestrator | Friday 10 April 2026 01:03:32 +0000 (0:00:00.611) 0:04:01.964 **********
2026-04-10 01:08:52.599220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 01:08:52.599225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 01:08:52.599234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-10 01:08:52.599238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599251 | orchestrator | 2026-04-10 01:08:52.599254 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-10 01:08:52.599258 | orchestrator | Friday 10 April 2026 01:03:34 +0000 (0:00:02.676) 0:04:04.640 ********** 2026-04-10 01:08:52.599262 | orchestrator | 2026-04-10 01:08:52.599266 
| orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-10 01:08:52.599270 | orchestrator | Friday 10 April 2026 01:03:35 +0000 (0:00:00.427) 0:04:05.067 ********** 2026-04-10 01:08:52.599274 | orchestrator | 2026-04-10 01:08:52.599277 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-10 01:08:52.599281 | orchestrator | Friday 10 April 2026 01:03:35 +0000 (0:00:00.311) 0:04:05.378 ********** 2026-04-10 01:08:52.599287 | orchestrator | 2026-04-10 01:08:52.599291 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-10 01:08:52.599295 | orchestrator | Friday 10 April 2026 01:03:35 +0000 (0:00:00.316) 0:04:05.695 ********** 2026-04-10 01:08:52.599299 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.599303 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:08:52.599306 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:08:52.599310 | orchestrator | 2026-04-10 01:08:52.599314 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-10 01:08:52.599318 | orchestrator | Friday 10 April 2026 01:03:55 +0000 (0:00:19.458) 0:04:25.154 ********** 2026-04-10 01:08:52.599322 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:52.599325 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:08:52.599329 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:08:52.599333 | orchestrator | 2026-04-10 01:08:52.599337 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-10 01:08:52.599341 | orchestrator | 2026-04-10 01:08:52.599344 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-10 01:08:52.599348 | orchestrator | Friday 10 April 2026 01:04:05 +0000 (0:00:10.141) 0:04:35.296 ********** 2026-04-10 01:08:52.599352 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.599356 | orchestrator | 2026-04-10 01:08:52.599360 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-10 01:08:52.599364 | orchestrator | Friday 10 April 2026 01:04:06 +0000 (0:00:01.154) 0:04:36.450 ********** 2026-04-10 01:08:52.599368 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.599371 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.599375 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.599379 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.599383 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.599387 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.599390 | orchestrator | 2026-04-10 01:08:52.599394 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-10 01:08:52.599398 | orchestrator | Friday 10 April 2026 01:04:07 +0000 (0:00:00.670) 0:04:37.121 ********** 2026-04-10 01:08:52.599402 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.599406 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.599409 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.599413 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 01:08:52.599417 | orchestrator | 2026-04-10 01:08:52.599421 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-10 01:08:52.599427 | orchestrator | Friday 10 April 2026 01:04:08 +0000 (0:00:00.882) 0:04:38.003 ********** 2026-04-10 01:08:52.599431 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-10 01:08:52.599435 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-10 01:08:52.599439 | orchestrator | ok: 
[testbed-node-5] => (item=br_netfilter) 2026-04-10 01:08:52.599443 | orchestrator | 2026-04-10 01:08:52.599446 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-10 01:08:52.599450 | orchestrator | Friday 10 April 2026 01:04:09 +0000 (0:00:00.971) 0:04:38.975 ********** 2026-04-10 01:08:52.599454 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-10 01:08:52.599458 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-10 01:08:52.599462 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-10 01:08:52.599466 | orchestrator | 2026-04-10 01:08:52.599470 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-10 01:08:52.599474 | orchestrator | Friday 10 April 2026 01:04:10 +0000 (0:00:01.084) 0:04:40.059 ********** 2026-04-10 01:08:52.599478 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-10 01:08:52.599481 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.599488 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-10 01:08:52.599492 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.599495 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-10 01:08:52.599499 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.599503 | orchestrator | 2026-04-10 01:08:52.599507 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-10 01:08:52.599520 | orchestrator | Friday 10 April 2026 01:04:10 +0000 (0:00:00.654) 0:04:40.714 ********** 2026-04-10 01:08:52.599524 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-10 01:08:52.599528 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-10 01:08:52.599532 | orchestrator | skipping: [testbed-node-0] 2026-04-10 
01:08:52.599535 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-10 01:08:52.599539 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-10 01:08:52.599543 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.599547 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-10 01:08:52.599551 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-10 01:08:52.599555 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.599558 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-10 01:08:52.599562 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-10 01:08:52.599566 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-10 01:08:52.599570 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-10 01:08:52.599574 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-10 01:08:52.599578 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-10 01:08:52.599581 | orchestrator | 2026-04-10 01:08:52.599585 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-10 01:08:52.599589 | orchestrator | Friday 10 April 2026 01:04:12 +0000 (0:00:01.956) 0:04:42.670 ********** 2026-04-10 01:08:52.599593 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.599597 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.599601 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.599604 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:08:52.599608 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:08:52.599612 | orchestrator | changed: 
[testbed-node-5] 2026-04-10 01:08:52.599616 | orchestrator | 2026-04-10 01:08:52.599620 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-10 01:08:52.599623 | orchestrator | Friday 10 April 2026 01:04:13 +0000 (0:00:01.110) 0:04:43.780 ********** 2026-04-10 01:08:52.599627 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.599631 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.599635 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.599639 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:08:52.599642 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:08:52.599646 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:08:52.599650 | orchestrator | 2026-04-10 01:08:52.599654 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-10 01:08:52.599658 | orchestrator | Friday 10 April 2026 01:04:15 +0000 (0:00:01.804) 0:04:45.585 ********** 2026-04-10 01:08:52.599662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599938 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.599994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600013 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600035 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600057 | orchestrator | 2026-04-10 01:08:52.600062 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-10 01:08:52.600068 | orchestrator | Friday 10 April 2026 01:04:17 +0000 
(0:00:02.073) 0:04:47.658 ********** 2026-04-10 01:08:52.600074 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.600081 | orchestrator | 2026-04-10 01:08:52.600086 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-10 01:08:52.600092 | orchestrator | Friday 10 April 2026 01:04:18 +0000 (0:00:01.278) 0:04:48.937 ********** 2026-04-10 01:08:52.600098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.600218 | orchestrator | 2026-04-10 01:08:52.600223 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-10 01:08:52.600229 | orchestrator | Friday 10 April 2026 01:04:22 +0000 (0:00:03.509) 0:04:52.446 ********** 2026-04-10 01:08:52.600239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.600247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.600253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600259 | orchestrator | skipping: [testbed-node-3] 2026-04-10 
01:08:52.600266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.600328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.600340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600346 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.600353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.600359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.600376 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600388 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.600394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-10 01:08:52.600402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600408 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.600418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-10 01:08:52.600425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600431 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.600437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-10 01:08:52.600443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600453 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.600459 | orchestrator | 2026-04-10 01:08:52.600465 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-10 01:08:52.600471 | orchestrator | Friday 10 April 2026 01:04:24 +0000 (0:00:01.794) 0:04:54.240 ********** 2026-04-10 01:08:52.600478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 
01:08:52.600717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.600754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600762 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.600769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.600775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.600787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600793 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.600800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.600806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.600817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.600823 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.600829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-10 01:08:52.600835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-10 01:08:52.600873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.601232 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.601281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.601291 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.601298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-10 01:08:52.601338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.601346 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.601352 | orchestrator | 2026-04-10 01:08:52.601359 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-10 01:08:52.601365 | orchestrator | Friday 10 April 2026 01:04:26 +0000 (0:00:02.107) 0:04:56.348 ********** 2026-04-10 01:08:52.601371 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.601377 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.601383 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.601389 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 01:08:52.601395 | orchestrator | 2026-04-10 01:08:52.601401 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-10 01:08:52.601407 | orchestrator | Friday 10 April 2026 01:04:27 +0000 (0:00:01.077) 0:04:57.425 ********** 2026-04-10 01:08:52.601413 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-10 01:08:52.601425 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-10 01:08:52.601431 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-10 01:08:52.601438 | orchestrator | 2026-04-10 01:08:52.601443 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-10 01:08:52.601449 | orchestrator | Friday 10 April 2026 01:04:28 +0000 (0:00:01.037) 0:04:58.463 ********** 2026-04-10 01:08:52.601455 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-10 01:08:52.601509 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-10 01:08:52.601725 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-10 01:08:52.601740 | orchestrator | 2026-04-10 
01:08:52.601747 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-10 01:08:52.601753 | orchestrator | Friday 10 April 2026 01:04:31 +0000 (0:00:03.058) 0:05:01.521 ********** 2026-04-10 01:08:52.601759 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:08:52.601765 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:08:52.601771 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:08:52.601777 | orchestrator | 2026-04-10 01:08:52.601783 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-10 01:08:52.601789 | orchestrator | Friday 10 April 2026 01:04:32 +0000 (0:00:01.320) 0:05:02.842 ********** 2026-04-10 01:08:52.601794 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:08:52.601800 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:08:52.601806 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:08:52.601812 | orchestrator | 2026-04-10 01:08:52.601817 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-10 01:08:52.601824 | orchestrator | Friday 10 April 2026 01:04:33 +0000 (0:00:01.068) 0:05:03.910 ********** 2026-04-10 01:08:52.601830 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-10 01:08:52.601836 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-10 01:08:52.601842 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-10 01:08:52.601847 | orchestrator | 2026-04-10 01:08:52.601853 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-10 01:08:52.601859 | orchestrator | Friday 10 April 2026 01:04:35 +0000 (0:00:01.240) 0:05:05.150 ********** 2026-04-10 01:08:52.601865 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-10 01:08:52.601870 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-10 01:08:52.601876 | orchestrator | changed: 
[testbed-node-5] => (item=nova-compute) 2026-04-10 01:08:52.601882 | orchestrator | 2026-04-10 01:08:52.601888 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-10 01:08:52.601895 | orchestrator | Friday 10 April 2026 01:04:36 +0000 (0:00:01.300) 0:05:06.451 ********** 2026-04-10 01:08:52.601901 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-10 01:08:52.602041 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-10 01:08:52.602051 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-10 01:08:52.602057 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-10 01:08:52.602063 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-10 01:08:52.602071 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-10 01:08:52.602077 | orchestrator | 2026-04-10 01:08:52.602084 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-10 01:08:52.602091 | orchestrator | Friday 10 April 2026 01:04:41 +0000 (0:00:04.698) 0:05:11.150 ********** 2026-04-10 01:08:52.602097 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.602104 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.602110 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.602114 | orchestrator | 2026-04-10 01:08:52.602117 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-10 01:08:52.602121 | orchestrator | Friday 10 April 2026 01:04:41 +0000 (0:00:00.305) 0:05:11.456 ********** 2026-04-10 01:08:52.602125 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.602135 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.602139 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.602143 | orchestrator | 2026-04-10 01:08:52.602147 | orchestrator | TASK [nova-cell : Ensuring 
libvirt secrets directory exists] ******************* 2026-04-10 01:08:52.602150 | orchestrator | Friday 10 April 2026 01:04:41 +0000 (0:00:00.236) 0:05:11.692 ********** 2026-04-10 01:08:52.602155 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:08:52.602162 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:08:52.602167 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:08:52.602172 | orchestrator | 2026-04-10 01:08:52.602177 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-10 01:08:52.602182 | orchestrator | Friday 10 April 2026 01:04:43 +0000 (0:00:01.378) 0:05:13.071 ********** 2026-04-10 01:08:52.602217 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-10 01:08:52.602226 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-10 01:08:52.602233 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-10 01:08:52.602239 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-10 01:08:52.602246 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-10 01:08:52.602252 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-10 01:08:52.602258 | orchestrator | 2026-04-10 01:08:52.602265 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-10 01:08:52.602271 | orchestrator | Friday 10 April 2026 01:04:46 +0000 (0:00:03.135) 
0:05:16.206 ********** 2026-04-10 01:08:52.602278 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-10 01:08:52.602284 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-10 01:08:52.602291 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-10 01:08:52.602297 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-10 01:08:52.602304 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:08:52.602308 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-10 01:08:52.602312 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:08:52.602316 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-10 01:08:52.602319 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:08:52.602323 | orchestrator | 2026-04-10 01:08:52.602327 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-10 01:08:52.602331 | orchestrator | Friday 10 April 2026 01:04:49 +0000 (0:00:03.364) 0:05:19.571 ********** 2026-04-10 01:08:52.602335 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.602338 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.602342 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.602346 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-10 01:08:52.602350 | orchestrator | 2026-04-10 01:08:52.602354 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-10 01:08:52.602358 | orchestrator | Friday 10 April 2026 01:04:52 +0000 (0:00:03.327) 0:05:22.899 ********** 2026-04-10 01:08:52.602361 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-10 01:08:52.602365 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-10 01:08:52.602369 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-10 01:08:52.602373 | orchestrator | 2026-04-10 01:08:52.602376 | 
orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-10 01:08:52.602380 | orchestrator | Friday 10 April 2026 01:04:53 +0000 (0:00:01.011) 0:05:23.910 ********** 2026-04-10 01:08:52.602389 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.602392 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.602396 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.602400 | orchestrator | 2026-04-10 01:08:52.602404 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-10 01:08:52.602410 | orchestrator | Friday 10 April 2026 01:04:54 +0000 (0:00:00.303) 0:05:24.214 ********** 2026-04-10 01:08:52.602416 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.602422 | orchestrator | 2026-04-10 01:08:52.602428 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-10 01:08:52.602434 | orchestrator | Friday 10 April 2026 01:04:54 +0000 (0:00:00.123) 0:05:24.337 ********** 2026-04-10 01:08:52.602440 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.602447 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.602453 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.602460 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.602466 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.602473 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.602479 | orchestrator | 2026-04-10 01:08:52.602485 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-10 01:08:52.602493 | orchestrator | Friday 10 April 2026 01:04:55 +0000 (0:00:00.896) 0:05:25.234 ********** 2026-04-10 01:08:52.602497 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-10 01:08:52.602500 | orchestrator | 2026-04-10 01:08:52.602504 | orchestrator | TASK [nova-cell : Set vendordata file path] 
************************************ 2026-04-10 01:08:52.602509 | orchestrator | Friday 10 April 2026 01:04:56 +0000 (0:00:00.744) 0:05:25.978 ********** 2026-04-10 01:08:52.602530 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.602541 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.602547 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.602553 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.602559 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.602565 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.602570 | orchestrator | 2026-04-10 01:08:52.602576 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-10 01:08:52.602588 | orchestrator | Friday 10 April 2026 01:04:56 +0000 (0:00:00.671) 0:05:26.650 ********** 2026-04-10 01:08:52.602622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602631 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 
01:08:52.602658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602718 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 
01:08:52.602743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602756 | orchestrator | 2026-04-10 01:08:52.602760 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-10 01:08:52.602765 | orchestrator | Friday 10 April 2026 01:05:02 +0000 (0:00:05.567) 0:05:32.217 ********** 2026-04-10 01:08:52.602770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.602775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.602780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.602795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.602800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-10 01:08:52.602807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-10 01:08:52.602812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602846 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.602899 | orchestrator | 2026-04-10 01:08:52.602906 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-10 01:08:52.602913 | orchestrator | Friday 10 April 2026 01:05:09 +0000 (0:00:07.259) 0:05:39.477 ********** 2026-04-10 01:08:52.602919 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.602926 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.602932 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.602937 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.602955 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.602960 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.602968 | orchestrator | 2026-04-10 01:08:52.602973 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-10 01:08:52.602980 | orchestrator | Friday 10 April 2026 01:05:11 +0000 (0:00:02.260) 0:05:41.737 ********** 2026-04-10 01:08:52.602986 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-10 01:08:52.602992 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-10 01:08:52.602998 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-10 01:08:52.603005 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-10 01:08:52.603011 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-10 01:08:52.603017 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-10 01:08:52.603024 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603031 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-10 01:08:52.603038 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-10 01:08:52.603044 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603050 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-10 01:08:52.603056 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603063 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-10 01:08:52.603069 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-10 01:08:52.603075 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-10 01:08:52.603082 | orchestrator |
2026-04-10 01:08:52.603088 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-10 01:08:52.603094 | orchestrator | Friday 10 April 2026 01:05:16 +0000 (0:00:04.434) 0:05:46.171 **********
2026-04-10 01:08:52.603101 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.603108 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.603112 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.603116 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603119 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603123 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603127 | orchestrator |
2026-04-10 01:08:52.603131 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-10 01:08:52.603135 | orchestrator | Friday 10 April 2026 01:05:16 +0000 (0:00:00.663) 0:05:46.835 **********
2026-04-10 01:08:52.603139 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-10 01:08:52.603143 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-10 01:08:52.603147 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-10 01:08:52.603150 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-10 01:08:52.603154 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-10 01:08:52.603158 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-10 01:08:52.603162 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603166 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603173 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603177 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603181 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603184 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603188 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603192 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603196 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603200 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603204 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603208 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603211 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603232 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603236 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-10 01:08:52.603240 | orchestrator |
2026-04-10 01:08:52.603244 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-10 01:08:52.603248 | orchestrator | Friday 10 April 2026 01:05:22 +0000 (0:00:05.420) 0:05:52.256 **********
2026-04-10 01:08:52.603252 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-10 01:08:52.603256 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-10 01:08:52.603259 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-10 01:08:52.603263 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-10 01:08:52.603267 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-10 01:08:52.603270 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-10 01:08:52.603274 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-10 01:08:52.603278 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-10 01:08:52.603282 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-10 01:08:52.603285 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-10 01:08:52.603289 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-10 01:08:52.603293 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-10 01:08:52.603297 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-10 01:08:52.603301 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603304 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-10 01:08:52.603308 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603312 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-10 01:08:52.603316 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-10 01:08:52.603320 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-10 01:08:52.603327 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-10 01:08:52.603331 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603334 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-10 01:08:52.603338 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-10 01:08:52.603342 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-10 01:08:52.603346 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-10 01:08:52.603349 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-10 01:08:52.603353 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-10 01:08:52.603357 | orchestrator |
2026-04-10 01:08:52.603361 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-10 01:08:52.603365 | orchestrator | Friday 10 April 2026 01:05:28 +0000 (0:00:06.541) 0:05:58.798 **********
2026-04-10 01:08:52.603368 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.603372 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.603376 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.603380 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603384 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603387 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603391 | orchestrator |
2026-04-10 01:08:52.603395 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-10 01:08:52.603399 | orchestrator | Friday 10 April 2026 01:05:29 +0000 (0:00:00.476) 0:05:59.274 **********
2026-04-10 01:08:52.603402 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.603406 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.603410 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.603414 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603418 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603421 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603425 | orchestrator |
2026-04-10 01:08:52.603429 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-10 01:08:52.603433 | orchestrator | Friday 10 April 2026 01:05:29 +0000 (0:00:00.628) 0:05:59.903 **********
2026-04-10 01:08:52.603437 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603440 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603444 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603448 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:08:52.603452 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:08:52.603456 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:08:52.603460 | orchestrator |
2026-04-10 01:08:52.603464 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-10 01:08:52.603468 | orchestrator | Friday 10 April 2026 01:05:31 +0000 (0:00:01.692) 0:06:01.596 **********
2026-04-10 01:08:52.603472 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603487 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603491 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603495 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:08:52.603499 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:08:52.603502 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:08:52.603506 | orchestrator |
2026-04-10 01:08:52.603510 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-10 01:08:52.603528 | orchestrator | Friday 10 April 2026 01:05:33 +0000 (0:00:01.776) 0:06:03.372 **********
2026-04-10 01:08:52.603533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-10 01:08:52.603541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-10 01:08:52.603545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-10 01:08:52.603550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.603554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-10 01:08:52.603558 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.603574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.603583 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.603590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-10 01:08:52.603596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-10 01:08:52.603603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.603608 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.603614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-10 01:08:52.603639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.603647 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.603654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-10 01:08:52.603665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.603671 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.603678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-10 01:08:52.603685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.603694 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.603701 | orchestrator |
2026-04-10 01:08:52.603707 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-10 01:08:52.603713 | orchestrator | Friday 10 April 2026 01:05:34 +0000 (0:00:01.191) 0:06:04.563 **********
2026-04-10 01:08:52.603719 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-10 01:08:52.603725 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-10 01:08:52.603731 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.603737 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-10 01:08:52.603744 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-10 01:08:52.603750 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.603756 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-10 01:08:52.603762 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-10 01:08:52.603768 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.603775 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-10 01:08:52.603781 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-10 01:08:52.603787 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-10 01:08:52.603793 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-10 01:08:52.603797 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.603802 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.603819 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-10 01:08:52.603828 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-10 01:08:52.603834 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.603840 | orchestrator | 2026-04-10 01:08:52.603846 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-10 01:08:52.603852 | orchestrator | Friday 10 April 2026 01:05:35 +0000 (0:00:00.717) 0:06:05.281 ********** 2026-04-10 01:08:52.603881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603981 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.603994 | orchestrator | 2026-04-10 01:08:52.603997 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-10 01:08:52.604001 | orchestrator | Friday 10 April 2026 01:05:37 +0000 (0:00:02.390) 0:06:07.672 ********** 2026-04-10 01:08:52.604005 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:08:52.604009 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:08:52.604012 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:08:52.604016 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.604020 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.604023 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.604027 | orchestrator | 2026-04-10 01:08:52.604031 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-10 01:08:52.604035 | orchestrator | Friday 10 April 2026 01:05:38 +0000 (0:00:00.615) 0:06:08.287 ********** 2026-04-10 01:08:52.604038 | orchestrator | 2026-04-10 01:08:52.604042 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-10 01:08:52.604046 | orchestrator | Friday 10 April 2026 01:05:38 +0000 (0:00:00.126) 0:06:08.413 ********** 2026-04-10 01:08:52.604052 | orchestrator | 2026-04-10 01:08:52.604056 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-10 01:08:52.604060 | orchestrator | Friday 10 April 2026 01:05:38 +0000 (0:00:00.123) 0:06:08.537 ********** 2026-04-10 01:08:52.604063 | orchestrator | 2026-04-10 01:08:52.604067 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-10 01:08:52.604071 | orchestrator | Friday 10 April 2026 01:05:38 +0000 
(0:00:00.121) 0:06:08.659 **********
2026-04-10 01:08:52.604075 | orchestrator |
2026-04-10 01:08:52.604078 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-10 01:08:52.604082 | orchestrator | Friday 10 April 2026 01:05:38 +0000 (0:00:00.121) 0:06:08.780 **********
2026-04-10 01:08:52.604086 | orchestrator |
2026-04-10 01:08:52.604090 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-10 01:08:52.604093 | orchestrator | Friday 10 April 2026 01:05:39 +0000 (0:00:00.223) 0:06:09.003 **********
2026-04-10 01:08:52.604097 | orchestrator |
2026-04-10 01:08:52.604101 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-10 01:08:52.604105 | orchestrator | Friday 10 April 2026 01:05:39 +0000 (0:00:00.120) 0:06:09.123 **********
2026-04-10 01:08:52.604108 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.604112 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:08:52.604116 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:08:52.604119 | orchestrator |
2026-04-10 01:08:52.604123 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-10 01:08:52.604127 | orchestrator | Friday 10 April 2026 01:05:45 +0000 (0:00:06.018) 0:06:15.141 **********
2026-04-10 01:08:52.604131 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.604135 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:08:52.604138 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:08:52.604142 | orchestrator |
2026-04-10 01:08:52.604146 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-10 01:08:52.604149 | orchestrator | Friday 10 April 2026 01:06:03 +0000 (0:00:17.969) 0:06:33.111 **********
2026-04-10 01:08:52.604154 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:08:52.604160 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:08:52.604166 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:08:52.604175 | orchestrator |
2026-04-10 01:08:52.604201 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-10 01:08:52.604208 | orchestrator | Friday 10 April 2026 01:06:23 +0000 (0:00:20.810) 0:06:53.922 **********
2026-04-10 01:08:52.604214 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:08:52.604221 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:08:52.604227 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:08:52.604233 | orchestrator |
2026-04-10 01:08:52.604240 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-10 01:08:52.604247 | orchestrator | Friday 10 April 2026 01:06:47 +0000 (0:00:24.000) 0:07:17.923 **********
2026-04-10 01:08:52.604253 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:08:52.604259 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-04-10 01:08:52.604266 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-04-10 01:08:52.604271 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:08:52.604275 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:08:52.604279 | orchestrator |
2026-04-10 01:08:52.604283 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-10 01:08:52.604287 | orchestrator | Friday 10 April 2026 01:06:54 +0000 (0:00:06.124) 0:07:24.047 **********
2026-04-10 01:08:52.604290 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:08:52.604294 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:08:52.604298 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:08:52.604301 | orchestrator |
2026-04-10 01:08:52.604305 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-10 01:08:52.604313 | orchestrator | Friday 10 April 2026 01:06:54 +0000 (0:00:00.663) 0:07:24.711 **********
2026-04-10 01:08:52.604317 | orchestrator | changed: [testbed-node-5]
2026-04-10 01:08:52.604320 | orchestrator | changed: [testbed-node-3]
2026-04-10 01:08:52.604324 | orchestrator | changed: [testbed-node-4]
2026-04-10 01:08:52.604328 | orchestrator |
2026-04-10 01:08:52.604332 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-10 01:08:52.604335 | orchestrator | Friday 10 April 2026 01:07:17 +0000 (0:00:22.930) 0:07:47.642 **********
2026-04-10 01:08:52.604339 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.604343 | orchestrator |
2026-04-10 01:08:52.604347 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-10 01:08:52.604351 | orchestrator | Friday 10 April 2026 01:07:18 +0000 (0:00:00.321) 0:07:47.964 **********
2026-04-10 01:08:52.604354 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.604358 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.604362 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.604365 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.604369 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.604373 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-04-10 01:08:52.604377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-10 01:08:52.604381 | orchestrator |
2026-04-10 01:08:52.604385 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-10 01:08:52.604389 | orchestrator | Friday 10 April 2026 01:07:37 +0000 (0:00:19.452) 0:08:07.416 **********
2026-04-10 01:08:52.604392 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.604396 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.604400 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.604404 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.604407 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.604411 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.604415 | orchestrator |
2026-04-10 01:08:52.604419 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-10 01:08:52.604422 | orchestrator | Friday 10 April 2026 01:07:44 +0000 (0:00:07.236) 0:08:14.653 **********
2026-04-10 01:08:52.604426 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.604430 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.604434 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.604437 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.604441 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.604445 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-04-10 01:08:52.604449 | orchestrator |
2026-04-10 01:08:52.604453 | orchestrator | TASK [nova-cell :
Get a list of existing cells] ********************************
2026-04-10 01:08:52.604456 | orchestrator | Friday 10 April 2026 01:07:46 +0000 (0:00:02.109) 0:08:16.763 **********
2026-04-10 01:08:52.604460 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-10 01:08:52.604464 | orchestrator |
2026-04-10 01:08:52.604469 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-10 01:08:52.604475 | orchestrator | Friday 10 April 2026 01:08:00 +0000 (0:00:14.080) 0:08:30.843 **********
2026-04-10 01:08:52.604481 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-10 01:08:52.604487 | orchestrator |
2026-04-10 01:08:52.604493 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-10 01:08:52.604499 | orchestrator | Friday 10 April 2026 01:08:01 +0000 (0:00:00.925) 0:08:31.769 **********
2026-04-10 01:08:52.604505 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.604546 | orchestrator |
2026-04-10 01:08:52.604552 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-10 01:08:52.604555 | orchestrator | Friday 10 April 2026 01:08:02 +0000 (0:00:00.769) 0:08:32.539 **********
2026-04-10 01:08:52.604563 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-10 01:08:52.604567 | orchestrator |
2026-04-10 01:08:52.604571 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-04-10 01:08:52.604574 | orchestrator | Friday 10 April 2026 01:08:16 +0000 (0:00:14.100) 0:08:46.639 **********
2026-04-10 01:08:52.604578 | orchestrator | ok: [testbed-node-3]
2026-04-10 01:08:52.604582 | orchestrator | ok: [testbed-node-4]
2026-04-10 01:08:52.604586 | orchestrator | ok: [testbed-node-5]
2026-04-10 01:08:52.604590 | orchestrator | ok: [testbed-node-0]
2026-04-10 01:08:52.604593 | orchestrator | ok: [testbed-node-1]
2026-04-10 01:08:52.604597 | orchestrator | ok: [testbed-node-2]
2026-04-10 01:08:52.604601 | orchestrator |
2026-04-10 01:08:52.604608 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-10 01:08:52.604612 | orchestrator |
2026-04-10 01:08:52.604615 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-10 01:08:52.604619 | orchestrator | Friday 10 April 2026 01:08:18 +0000 (0:00:01.916) 0:08:48.555 **********
2026-04-10 01:08:52.604623 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.604627 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:08:52.604631 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:08:52.604635 | orchestrator |
2026-04-10 01:08:52.604640 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-10 01:08:52.604646 | orchestrator |
2026-04-10 01:08:52.604651 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-10 01:08:52.604657 | orchestrator | Friday 10 April 2026 01:08:19 +0000 (0:00:01.257) 0:08:49.813 **********
2026-04-10 01:08:52.604667 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.604674 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.604680 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.604686 | orchestrator |
2026-04-10 01:08:52.604691 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-10 01:08:52.604697 | orchestrator |
2026-04-10 01:08:52.604704 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-10 01:08:52.604709 | orchestrator | Friday 10 April 2026 01:08:20 +0000 (0:00:00.521) 0:08:50.334 **********
2026-04-10 01:08:52.604714 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-10 01:08:52.604720 |
orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-10 01:08:52.604725 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-10 01:08:52.604731 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-10 01:08:52.604737 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-10 01:08:52.604743 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-10 01:08:52.604749 | orchestrator | skipping: [testbed-node-3]
2026-04-10 01:08:52.604755 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-10 01:08:52.604760 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-10 01:08:52.604766 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-10 01:08:52.604771 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-10 01:08:52.604777 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-10 01:08:52.604783 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-10 01:08:52.604788 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-10 01:08:52.604794 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-10 01:08:52.604800 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-10 01:08:52.604806 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-10 01:08:52.604813 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-10 01:08:52.604819 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-10 01:08:52.604826 | orchestrator | skipping: [testbed-node-4]
2026-04-10 01:08:52.604837 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-10 01:08:52.604843 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-10 01:08:52.604849 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-10 01:08:52.604855 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-10 01:08:52.604861 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-10 01:08:52.604867 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-10 01:08:52.604873 | orchestrator | skipping: [testbed-node-5]
2026-04-10 01:08:52.604880 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-10 01:08:52.604887 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-10 01:08:52.604893 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-10 01:08:52.604899 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-10 01:08:52.604906 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-10 01:08:52.604912 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-10 01:08:52.604918 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.604925 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.604931 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-10 01:08:52.604937 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-10 01:08:52.604943 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-10 01:08:52.604949 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-10 01:08:52.604955 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-10 01:08:52.604962 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-10 01:08:52.604968 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.604975 | orchestrator |
2026-04-10 01:08:52.604981 | orchestrator | PLAY [Reload global Nova API
services] *****************************************
2026-04-10 01:08:52.604988 | orchestrator |
2026-04-10 01:08:52.604994 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-10 01:08:52.605001 | orchestrator | Friday 10 April 2026 01:08:21 +0000 (0:00:01.340) 0:08:51.675 **********
2026-04-10 01:08:52.605007 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-10 01:08:52.605014 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-10 01:08:52.605020 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.605027 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-10 01:08:52.605033 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-10 01:08:52.605046 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.605052 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-10 01:08:52.605058 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-10 01:08:52.605062 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.605065 | orchestrator |
2026-04-10 01:08:52.605069 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-10 01:08:52.605073 | orchestrator |
2026-04-10 01:08:52.605077 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-10 01:08:52.605081 | orchestrator | Friday 10 April 2026 01:08:22 +0000 (0:00:00.768) 0:08:52.444 **********
2026-04-10 01:08:52.605085 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.605088 | orchestrator |
2026-04-10 01:08:52.605092 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-10 01:08:52.605097 | orchestrator |
2026-04-10 01:08:52.605103 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-10 01:08:52.605109 | orchestrator | Friday 10 April 2026 01:08:23 +0000 (0:00:00.642) 0:08:53.086 **********
2026-04-10 01:08:52.605116 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.605122 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:52.605134 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:52.605140 | orchestrator |
2026-04-10 01:08:52.605147 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 01:08:52.605153 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-10 01:08:52.605161 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-10 01:08:52.605167 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-10 01:08:52.605174 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-10 01:08:52.605180 | orchestrator | testbed-node-3 : ok=46  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-10 01:08:52.605186 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-10 01:08:52.605193 | orchestrator | testbed-node-5 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-10 01:08:52.605199 | orchestrator |
2026-04-10 01:08:52.605205 | orchestrator |
2026-04-10 01:08:52.605211 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 01:08:52.605217 | orchestrator | Friday 10 April 2026 01:08:23 +0000 (0:00:00.609) 0:08:53.695 **********
2026-04-10 01:08:52.605224 | orchestrator | ===============================================================================
2026-04-10 01:08:52.605231 | orchestrator | nova : Running Nova API bootstrap container
---------------------------- 36.87s
2026-04-10 01:08:52.605237 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 24.00s
2026-04-10 01:08:52.605243 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.93s
2026-04-10 01:08:52.605249 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 21.13s
2026-04-10 01:08:52.605255 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.81s
2026-04-10 01:08:52.605262 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.41s
2026-04-10 01:08:52.605268 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.46s
2026-04-10 01:08:52.605274 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 19.45s
2026-04-10 01:08:52.605280 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.97s
2026-04-10 01:08:52.605286 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.52s
2026-04-10 01:08:52.605292 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 14.10s
2026-04-10 01:08:52.605298 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.08s
2026-04-10 01:08:52.605305 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.60s
2026-04-10 01:08:52.605312 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.11s
2026-04-10 01:08:52.605318 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.77s
2026-04-10 01:08:52.605325 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.14s
2026-04-10 01:08:52.605331 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.51s
2026-04-10 01:08:52.605338 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.20s
2026-04-10 01:08:52.605342 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.52s
2026-04-10 01:08:52.605346 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 7.26s
2026-04-10 01:08:52.605353 | orchestrator |
2026-04-10 01:08:52.605357 | orchestrator |
2026-04-10 01:08:52.605361 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-10 01:08:52.605365 | orchestrator |
2026-04-10 01:08:52.605368 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-10 01:08:52.605376 | orchestrator | Friday 10 April 2026 01:05:15 +0000 (0:00:00.293) 0:00:00.293 **********
2026-04-10 01:08:52.605380 | orchestrator | ok: [testbed-node-0]
2026-04-10 01:08:52.605384 | orchestrator | ok: [testbed-node-1]
2026-04-10 01:08:52.605388 | orchestrator | ok: [testbed-node-2]
2026-04-10 01:08:52.605392 | orchestrator |
2026-04-10 01:08:52.605395 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-10 01:08:52.605399 | orchestrator | Friday 10 April 2026 01:05:16 +0000 (0:00:00.262) 0:00:00.556 **********
2026-04-10 01:08:52.605403 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-10 01:08:52.605407 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-10 01:08:52.605411 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-10 01:08:52.605414 | orchestrator |
2026-04-10 01:08:52.605418 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-10 01:08:52.605422 | orchestrator |
2026-04-10 01:08:52.605426 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-10 01:08:52.605429 | orchestrator | Friday 10 April 2026 01:05:16 +0000 (0:00:00.277) 0:00:00.833 **********
2026-04-10 01:08:52.605433 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 01:08:52.605437 | orchestrator |
2026-04-10 01:08:52.605441 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-04-10 01:08:52.605445 | orchestrator | Friday 10 April 2026 01:05:17 +0000 (0:00:00.805) 0:00:01.639 **********
2026-04-10 01:08:52.605448 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-04-10 01:08:52.605452 | orchestrator |
2026-04-10 01:08:52.605458 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-04-10 01:08:52.605463 | orchestrator | Friday 10 April 2026 01:05:21 +0000 (0:00:03.798) 0:00:05.437 **********
2026-04-10 01:08:52.605467 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-04-10 01:08:52.605471 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-04-10 01:08:52.605475 | orchestrator |
2026-04-10 01:08:52.605479 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-04-10 01:08:52.605482 | orchestrator | Friday 10 April 2026 01:05:27 +0000 (0:00:06.230) 0:00:11.667 **********
2026-04-10 01:08:52.605486 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-10 01:08:52.605490 | orchestrator |
2026-04-10 01:08:52.605496 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-04-10 01:08:52.605502 | orchestrator | Friday 10 April 2026 01:05:30 +0000 (0:00:03.201) 0:00:14.869 **********
2026-04-10 01:08:52.605509 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-04-10 01:08:52.605550 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-10 01:08:52.605557 | orchestrator |
2026-04-10 01:08:52.605562 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-04-10 01:08:52.605568 | orchestrator | Friday 10 April 2026 01:05:33 +0000 (0:00:03.380) 0:00:18.250 **********
2026-04-10 01:08:52.605574 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-10 01:08:52.605580 | orchestrator |
2026-04-10 01:08:52.605586 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-04-10 01:08:52.605592 | orchestrator | Friday 10 April 2026 01:05:37 +0000 (0:00:03.244) 0:00:21.494 **********
2026-04-10 01:08:52.605598 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-04-10 01:08:52.605605 | orchestrator |
2026-04-10 01:08:52.605611 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-04-10 01:08:52.605625 | orchestrator | Friday 10 April 2026 01:05:40 +0000 (0:00:03.420) 0:00:24.914 **********
2026-04-10 01:08:52.605634 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.605641 | orchestrator |
2026-04-10 01:08:52.605647 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-04-10 01:08:52.605653 | orchestrator | Friday 10 April 2026 01:05:43 +0000 (0:00:03.152) 0:00:28.066 **********
2026-04-10 01:08:52.605659 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.605665 | orchestrator |
2026-04-10 01:08:52.605672 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-04-10 01:08:52.605678 | orchestrator | Friday 10 April 2026 01:05:46 +0000 (0:00:03.292) 0:00:31.359 **********
2026-04-10 01:08:52.605683 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.605689 | orchestrator |
2026-04-10 01:08:52.605695 | orchestrator | TASK
[magnum : Ensuring config directories exist] ******************************
2026-04-10 01:08:52.605701 | orchestrator | Friday 10 April 2026 01:05:50 +0000 (0:00:03.570) 0:00:34.929 **********
2026-04-10 01:08:52.605708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-10 01:08:52.605721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.605729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-10 01:08:52.605736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-10 01:08:52.605749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.605756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-10 01:08:52.605763 | orchestrator |
2026-04-10 01:08:52.605767 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-10 01:08:52.605773 | orchestrator | Friday 10 April 2026 01:05:52 +0000 (0:00:01.803) 0:00:36.733 **********
2026-04-10 01:08:52.605777 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:52.605781 | orchestrator |
2026-04-10 01:08:52.605785 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-10 01:08:52.605789 | orchestrator | Friday 10 April 2026 01:05:52 +0000 (0:00:00.128) 0:00:36.861 **********
2026-04-10 01:08:52.605793 |
orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.605797 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.605800 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.605804 | orchestrator | 2026-04-10 01:08:52.605808 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-10 01:08:52.605812 | orchestrator | Friday 10 April 2026 01:05:52 +0000 (0:00:00.277) 0:00:37.139 ********** 2026-04-10 01:08:52.605815 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 01:08:52.605819 | orchestrator | 2026-04-10 01:08:52.605823 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-10 01:08:52.605827 | orchestrator | Friday 10 April 2026 01:05:53 +0000 (0:00:00.856) 0:00:37.996 ********** 2026-04-10 01:08:52.605831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.605839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.605846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.605860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.605867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.605873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 
01:08:52.605884 | orchestrator | 2026-04-10 01:08:52.605891 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-10 01:08:52.605897 | orchestrator | Friday 10 April 2026 01:05:56 +0000 (0:00:02.887) 0:00:40.884 ********** 2026-04-10 01:08:52.605903 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:52.605909 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:08:52.605916 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:08:52.605922 | orchestrator | 2026-04-10 01:08:52.605928 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-10 01:08:52.605934 | orchestrator | Friday 10 April 2026 01:05:56 +0000 (0:00:00.391) 0:00:41.275 ********** 2026-04-10 01:08:52.605941 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:52.605945 | orchestrator | 2026-04-10 01:08:52.605949 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-10 01:08:52.605953 | orchestrator | Friday 10 April 2026 01:05:57 +0000 (0:00:00.477) 0:00:41.753 ********** 2026-04-10 01:08:52.605957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.605962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.605969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2026-04-10 01:08:52.605977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.605981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.605985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.605989 | orchestrator | 2026-04-10 01:08:52.605992 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-10 01:08:52.605996 | orchestrator | Friday 10 April 2026 01:05:59 +0000 (0:00:02.144) 0:00:43.897 ********** 2026-04-10 01:08:52.606000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606035 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.606040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606048 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.606052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606063 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.606067 | orchestrator | 2026-04-10 
01:08:52.606071 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-10 01:08:52.606075 | orchestrator | Friday 10 April 2026 01:06:00 +0000 (0:00:01.244) 0:00:45.141 ********** 2026-04-10 01:08:52.606079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606092 | orchestrator | skipping: 
[testbed-node-0] 2026-04-10 01:08:52.606095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606103 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.606111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606123 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.606130 | orchestrator | 2026-04-10 01:08:52.606136 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-10 01:08:52.606142 | orchestrator | Friday 10 April 2026 01:06:01 +0000 (0:00:00.722) 0:00:45.863 ********** 2026-04-10 01:08:52.606149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606186 | orchestrator | 2026-04-10 01:08:52.606190 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-10 01:08:52.606193 | orchestrator | Friday 10 April 2026 01:06:03 +0000 (0:00:02.050) 0:00:47.914 ********** 2026-04-10 01:08:52.606197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606228 | orchestrator | 2026-04-10 01:08:52.606231 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-10 01:08:52.606235 | orchestrator | Friday 10 April 2026 01:06:09 +0000 (0:00:06.229) 0:00:54.143 ********** 2026-04-10 01:08:52.606242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606258 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.606262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606266 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.606270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-10 01:08:52.606306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-10 01:08:52.606312 | orchestrator | skipping: 
[testbed-node-2] 2026-04-10 01:08:52.606316 | orchestrator | 2026-04-10 01:08:52.606321 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-10 01:08:52.606328 | orchestrator | Friday 10 April 2026 01:06:10 +0000 (0:00:00.797) 0:00:54.941 ********** 2026-04-10 01:08:52.606337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-10 01:08:52.606358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-10 01:08:52.606386 | orchestrator | 2026-04-10 01:08:52.606392 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-10 01:08:52.606398 | orchestrator | Friday 10 April 2026 01:06:12 +0000 (0:00:01.739) 0:00:56.680 ********** 2026-04-10 01:08:52.606405 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:52.606411 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:52.606418 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:52.606424 | orchestrator | 2026-04-10 01:08:52.606431 | orchestrator 
| TASK [magnum : Creating Magnum database] ***************************************
2026-04-10 01:08:52.606437 | orchestrator | Friday 10 April 2026 01:06:12 +0000 (0:00:00.264) 0:00:56.945 **********
2026-04-10 01:08:52.606444 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.606450 | orchestrator |
2026-04-10 01:08:52.606456 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-04-10 01:08:52.606460 | orchestrator | Friday 10 April 2026 01:06:14 +0000 (0:00:02.078) 0:00:59.023 **********
2026-04-10 01:08:52.606464 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.606468 | orchestrator |
2026-04-10 01:08:52.606471 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-10 01:08:52.606475 | orchestrator | Friday 10 April 2026 01:06:16 +0000 (0:00:02.331) 0:01:01.354 **********
2026-04-10 01:08:52.606479 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.606483 | orchestrator |
2026-04-10 01:08:52.606486 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-10 01:08:52.606490 | orchestrator | Friday 10 April 2026 01:06:32 +0000 (0:00:15.269) 0:01:16.624 **********
2026-04-10 01:08:52.606494 | orchestrator |
2026-04-10 01:08:52.606498 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-10 01:08:52.606502 | orchestrator | Friday 10 April 2026 01:06:32 +0000 (0:00:00.351) 0:01:16.976 **********
2026-04-10 01:08:52.606510 | orchestrator |
2026-04-10 01:08:52.606528 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-10 01:08:52.606533 | orchestrator | Friday 10 April 2026 01:06:32 +0000 (0:00:00.101) 0:01:17.077 **********
2026-04-10 01:08:52.606536 | orchestrator |
2026-04-10 01:08:52.606540 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-10 01:08:52.606544 | orchestrator | Friday 10 April 2026 01:06:32 +0000 (0:00:00.083) 0:01:17.160 **********
2026-04-10 01:08:52.606548 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.606552 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:08:52.606555 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:08:52.606559 | orchestrator |
2026-04-10 01:08:52.606563 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-10 01:08:52.606567 | orchestrator | Friday 10 April 2026 01:06:42 +0000 (0:00:09.481) 0:01:26.642 **********
2026-04-10 01:08:52.606571 | orchestrator | changed: [testbed-node-0]
2026-04-10 01:08:52.606575 | orchestrator | changed: [testbed-node-2]
2026-04-10 01:08:52.606578 | orchestrator | changed: [testbed-node-1]
2026-04-10 01:08:52.606582 | orchestrator |
2026-04-10 01:08:52.606586 | orchestrator | PLAY RECAP *********************************************************************
2026-04-10 01:08:52.606590 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-10 01:08:52.606595 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-10 01:08:52.606599 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-10 01:08:52.606602 | orchestrator |
2026-04-10 01:08:52.606606 | orchestrator |
2026-04-10 01:08:52.606610 | orchestrator | TASKS RECAP ********************************************************************
2026-04-10 01:08:52.606614 | orchestrator | Friday 10 April 2026 01:06:49 +0000 (0:00:07.504) 0:01:34.146 **********
2026-04-10 01:08:52.606618 | orchestrator | ===============================================================================
2026-04-10 01:08:52.606624 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.27s
2026-04-10 01:08:52.606631 | orchestrator | magnum : Restart magnum-api container ----------------------------------- 9.48s
2026-04-10 01:08:52.606637 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 7.50s
2026-04-10 01:08:52.606644 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.23s
2026-04-10 01:08:52.606654 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.23s
2026-04-10 01:08:52.606660 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.80s
2026-04-10 01:08:52.606664 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.57s
2026-04-10 01:08:52.606668 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.42s
2026-04-10 01:08:52.606672 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.38s
2026-04-10 01:08:52.606676 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.29s
2026-04-10 01:08:52.606679 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.24s
2026-04-10 01:08:52.606683 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.20s
2026-04-10 01:08:52.606687 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.15s
2026-04-10 01:08:52.606691 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.89s
2026-04-10 01:08:52.606694 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.33s
2026-04-10 01:08:52.606698 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.14s
2026-04-10 01:08:52.606702 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.08s
2026-04-10 01:08:52.606709 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.05s
2026-04-10 01:08:52.606713 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.80s
2026-04-10 01:08:52.606717 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.74s
2026-04-10 01:08:52.606721 | orchestrator | 2026-04-10 01:08:52 | INFO  | Task 85995e83-1c53-463e-8b5b-2179dc11a94c is in state SUCCESS
2026-04-10 01:08:52.606725 | orchestrator | 2026-04-10 01:08:52 | INFO  | Wait 1 second(s) until the next check
2026-04-10 01:08:55.643276 | orchestrator | 2026-04-10 01:08:55 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED
2026-04-10 01:08:55.647439 | orchestrator | 2026-04-10 01:08:55 | INFO  | Task daad55ef-6d45-40e1-a4e2-a20a9aefe8a2 is in state SUCCESS
2026-04-10 01:08:55.649173 | orchestrator |
2026-04-10 01:08:55.649238 | orchestrator |
2026-04-10 01:08:55.649245 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-10 01:08:55.649250 | orchestrator |
2026-04-10 01:08:55.649255 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-10 01:08:55.649260 | orchestrator | Friday 10 April 2026 01:05:48 +0000 (0:00:00.282) 0:00:00.283 **********
2026-04-10 01:08:55.649264 | orchestrator | ok: [testbed-node-0]
2026-04-10 01:08:55.649269 | orchestrator | ok: [testbed-node-1]
2026-04-10 01:08:55.649274 | orchestrator | ok: [testbed-node-2]
2026-04-10 01:08:55.649277 | orchestrator |
2026-04-10 01:08:55.649282 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-10 01:08:55.649286 | orchestrator | Friday 10 April 2026 01:05:48 +0000 (0:00:00.264) 0:00:00.547 **********
2026-04-10 01:08:55.649290 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-10 01:08:55.649295 | orchestrator |
ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-10 01:08:55.649301 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-10 01:08:55.649307 | orchestrator | 2026-04-10 01:08:55.649316 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-10 01:08:55.649323 | orchestrator | 2026-04-10 01:08:55.649330 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-10 01:08:55.649336 | orchestrator | Friday 10 April 2026 01:05:49 +0000 (0:00:00.267) 0:00:00.814 ********** 2026-04-10 01:08:55.649343 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:55.649350 | orchestrator | 2026-04-10 01:08:55.649356 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-10 01:08:55.649361 | orchestrator | Friday 10 April 2026 01:05:49 +0000 (0:00:00.731) 0:00:01.545 ********** 2026-04-10 01:08:55.649370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.649396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.649434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.649448 | orchestrator | 2026-04-10 01:08:55.649454 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-10 01:08:55.649460 | orchestrator | Friday 10 April 2026 01:05:50 +0000 (0:00:00.942) 0:00:02.488 ********** 2026-04-10 01:08:55.649467 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-10 01:08:55.649474 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-10 01:08:55.649480 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-10 01:08:55.649487 | orchestrator | 2026-04-10 01:08:55.649493 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-10 01:08:55.649499 | orchestrator | Friday 10 April 2026 01:05:51 +0000 
(0:00:00.845) 0:00:03.333 ********** 2026-04-10 01:08:55.649505 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:08:55.649511 | orchestrator | 2026-04-10 01:08:55.649600 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-10 01:08:55.649605 | orchestrator | Friday 10 April 2026 01:05:52 +0000 (0:00:00.483) 0:00:03.816 ********** 2026-04-10 01:08:55.649628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.649636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.649643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.649649 | orchestrator | 2026-04-10 01:08:55.649668 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-10 01:08:55.649674 | orchestrator | Friday 10 April 2026 01:05:53 +0000 (0:00:01.660) 0:00:05.477 ********** 2026-04-10 01:08:55.649681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-10 01:08:55.649688 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:55.649694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-10 01:08:55.649701 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:55.649714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-10 01:08:55.649720 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:55.649726 | orchestrator | 2026-04-10 01:08:55.649732 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-10 01:08:55.649738 | orchestrator | Friday 10 April 2026 01:05:54 +0000 (0:00:00.402) 0:00:05.880 ********** 2026-04-10 01:08:55.649744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649750 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:55.649757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649769 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:55.649779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649785 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:55.649791 | orchestrator |
2026-04-10 01:08:55.649797 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-04-10 01:08:55.649803 | orchestrator | Friday 10 April 2026 01:05:54 +0000 (0:00:00.515) 0:00:06.395 **********
2026-04-10 01:08:55.649809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649834 | orchestrator |
2026-04-10 01:08:55.649840 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-10 01:08:55.649846 | orchestrator | Friday 10 April 2026 01:05:56 +0000 (0:00:01.630) 0:00:08.025 **********
2026-04-10 01:08:55.649852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-10 01:08:55.649878 | orchestrator |
2026-04-10 01:08:55.649885 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-10 01:08:55.649891 | orchestrator | Friday 10 April 2026 01:05:57 +0000 (0:00:01.311) 0:00:09.337 **********
2026-04-10 01:08:55.649897 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:55.649904 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:55.649910 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:55.649917 | orchestrator |
2026-04-10 01:08:55.649922 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-10 01:08:55.649928 | orchestrator | Friday 10 April 2026 01:05:57 +0000 (0:00:00.305) 0:00:09.643 **********
2026-04-10 01:08:55.649934 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-10 01:08:55.650176 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-10 01:08:55.650191 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-10 01:08:55.650197 | orchestrator |
2026-04-10 01:08:55.650203 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-10 01:08:55.650210 | orchestrator | Friday 10 April 2026 01:05:59 +0000 (0:00:01.216) 0:00:10.859 **********
2026-04-10 01:08:55.650216 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-10 01:08:55.650223 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-10 01:08:55.650230 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-10 01:08:55.650236 | orchestrator |
2026-04-10 01:08:55.650242 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-10 01:08:55.650248 | orchestrator | Friday 10 April 2026 01:06:00 +0000 (0:00:00.950) 0:00:12.122 **********
2026-04-10 01:08:55.650264 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-10 01:08:55.650270 | orchestrator |
2026-04-10 01:08:55.650276 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-10 01:08:55.650284 | orchestrator | Friday 10 April 2026 01:06:01 +0000 (0:00:00.578) 0:00:13.073 **********
2026-04-10 01:08:55.650290 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-10 01:08:55.650297 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-10 01:08:55.650313 | orchestrator | ok: [testbed-node-0]
2026-04-10 01:08:55.650320 | orchestrator | ok: [testbed-node-1]
2026-04-10 01:08:55.650326 | orchestrator | ok: [testbed-node-2]
2026-04-10 01:08:55.650331 | orchestrator |
2026-04-10 01:08:55.650337 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-10 01:08:55.650343 | orchestrator | Friday 10 April 2026 01:06:01 +0000 (0:00:00.578) 0:00:13.652 **********
2026-04-10 01:08:55.650350 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:08:55.650356 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:08:55.650362 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:08:55.650368 | orchestrator |
2026-04-10 01:08:55.650374 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-10 01:08:55.650380 | orchestrator | Friday 10 April 2026 01:06:02 +0000 (0:00:00.283) 0:00:13.936 **********
2026-04-10 01:08:55.650388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1107250, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9554055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1107250, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9554055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1107250, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9554055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1107401, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9900944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1107401, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9900944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1107401, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9900944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1107891, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.12939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1107891, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.12939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1107891, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.12939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1107393, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.98469, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1107393, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.98469, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1107393, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.98469, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1107896, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1342602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1107896, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1342602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1107896, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1342602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1107259, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9781842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1107259, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9781842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1107259, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9781842, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1107431, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9949417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1107431, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9949417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1107431, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9949417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1107881, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.124818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1107881, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.124818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1107881, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.124818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1107249, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9539642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1107249, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9539642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1107249, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9539642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1107257, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9559863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1107257, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9559863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1107257, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9559863, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1107397, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9854946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1107397, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9854946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1107397, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9854946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1107872, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1224031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1107872, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1224031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1107872, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1224031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1107887, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1260774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1107887, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1260774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.650831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1107887, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1260774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.651085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1107382, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9833212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.651101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1107382, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9833212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.651106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1107382, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9833212, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.651115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1107879, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.124178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-10 01:08:55.651133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1107879, 'dev': 
144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.124178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1107879, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.124178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1107911, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1360984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 
'inode': 1107911, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1360984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1107911, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1360984, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1107433, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1214902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 38375, 'inode': 1107433, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1214902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1107433, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1214902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1107423, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9929907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1107423, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9929907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1107423, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9929907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1107417, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9923108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1107417, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9923108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1107417, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9923108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1107874, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1238682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1107874, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1238682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1107874, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1238682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1107415, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9906695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1107415, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9906695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1107415, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.9906695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1107884, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1259384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651307 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1107884, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1259384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1107884, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1259384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1107371, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.979844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651336 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1107371, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.979844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1107371, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780213.979844, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1108579, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3546069, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 
01:08:55.651360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1108579, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3546069, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1108579, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3546069, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1108518, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3120804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-04-10 01:08:55.651417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1108518, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3120804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1108518, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3120804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1107931, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1391385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1107931, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1391385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1107931, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1391385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1108530, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.315566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1108530, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.315566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1108530, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.315566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1107920, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 
1775779350.0, 'ctime': 1775780214.136873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1107920, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.136873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1108546, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.328642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1107920, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.136873, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1108546, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.328642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1108532, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3230805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1108546, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.328642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1108532, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3230805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1108553, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.330491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1108532, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3230805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1108553, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.330491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1108572, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3530152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651604 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1108572, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3530152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1108553, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.330491, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1108545, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3250804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1108545, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3250804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1108572, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3530152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1108525, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3145463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1108525, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3145463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1108545, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3250804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1108517, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3090804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1108517, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3090804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1108525, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3145463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1108523, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.313606, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1108523, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.313606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1108517, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3090804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1107934, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 
'mtime': 1775779350.0, 'ctime': 1775780214.3080802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1108523, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.313606, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1107934, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3080802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 15957, 'inode': 1108529, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3149264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1108529, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3149264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1107934, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3080802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1108561, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3511355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1108529, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3149264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1108559, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3340807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1108561, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3511355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1108561, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3511355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1107924, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.137437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651829 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1108559, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3340807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55 | INFO  | Wait 1 second(s) until the next check 2026-04-10 01:08:55.651852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1107929, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1387079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1108559, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3340807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1107924, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.137437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1108543, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3240805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1107924, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 
1775779350.0, 'ctime': 1775780214.137437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1107929, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1387079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1108557, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3310807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1107929, 
'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.1387079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1108543, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3240805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1108543, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3240805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1108557, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3310807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1108557, 'dev': 144, 'nlink': 1, 'atime': 1775779350.0, 'mtime': 1775779350.0, 'ctime': 1775780214.3310807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-10 01:08:55.651967 | orchestrator | 2026-04-10 01:08:55.651975 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-10 01:08:55.651983 | orchestrator | Friday 10 April 2026 01:06:40 +0000 (0:00:38.275) 0:00:52.211 ********** 2026-04-10 01:08:55.651990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.652001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.652009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-10 01:08:55.652016 | orchestrator | 2026-04-10 01:08:55.652023 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-10 01:08:55.652029 | orchestrator | Friday 10 April 2026 01:06:41 +0000 (0:00:01.120) 0:00:53.331 ********** 2026-04-10 01:08:55.652035 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:55.652042 | orchestrator | 2026-04-10 01:08:55.652048 | orchestrator | TASK [grafana : Creating grafana database 
user and setting permissions] ******** 2026-04-10 01:08:55.652055 | orchestrator | Friday 10 April 2026 01:06:43 +0000 (0:00:02.054) 0:00:55.386 ********** 2026-04-10 01:08:55.652061 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:55.652066 | orchestrator | 2026-04-10 01:08:55.652073 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-10 01:08:55.652078 | orchestrator | Friday 10 April 2026 01:06:46 +0000 (0:00:02.396) 0:00:57.782 ********** 2026-04-10 01:08:55.652084 | orchestrator | 2026-04-10 01:08:55.652090 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-10 01:08:55.652096 | orchestrator | Friday 10 April 2026 01:06:46 +0000 (0:00:00.065) 0:00:57.847 ********** 2026-04-10 01:08:55.652108 | orchestrator | 2026-04-10 01:08:55.652115 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-10 01:08:55.652121 | orchestrator | Friday 10 April 2026 01:06:46 +0000 (0:00:00.068) 0:00:57.916 ********** 2026-04-10 01:08:55.652128 | orchestrator | 2026-04-10 01:08:55.652134 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-10 01:08:55.652145 | orchestrator | Friday 10 April 2026 01:06:46 +0000 (0:00:00.068) 0:00:57.985 ********** 2026-04-10 01:08:55.652153 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:55.652157 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:55.652161 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:08:55.652165 | orchestrator | 2026-04-10 01:08:55.652169 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-10 01:08:55.652172 | orchestrator | Friday 10 April 2026 01:06:48 +0000 (0:00:01.877) 0:00:59.862 ********** 2026-04-10 01:08:55.652176 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:55.652180 | orchestrator | skipping: 
[testbed-node-2] 2026-04-10 01:08:55.652184 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-10 01:08:55.652189 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-10 01:08:55.652193 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:55.652197 | orchestrator | 2026-04-10 01:08:55.652201 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-10 01:08:55.652205 | orchestrator | Friday 10 April 2026 01:07:15 +0000 (0:00:27.018) 0:01:26.881 ********** 2026-04-10 01:08:55.652209 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:55.652213 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:08:55.652217 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:08:55.652221 | orchestrator | 2026-04-10 01:08:55.652225 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-10 01:08:55.652229 | orchestrator | Friday 10 April 2026 01:07:40 +0000 (0:00:25.764) 0:01:52.646 ********** 2026-04-10 01:08:55.652233 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:08:55.652237 | orchestrator | 2026-04-10 01:08:55.652243 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-10 01:08:55.652249 | orchestrator | Friday 10 April 2026 01:07:43 +0000 (0:00:02.295) 0:01:54.942 ********** 2026-04-10 01:08:55.652256 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:55.652260 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:08:55.652264 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:08:55.652268 | orchestrator | 2026-04-10 01:08:55.652271 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-10 01:08:55.652275 | orchestrator | Friday 10 April 2026 01:07:43 +0000 (0:00:00.395) 0:01:55.337 
********** 2026-04-10 01:08:55.652281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-04-10 01:08:55.652291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-10 01:08:55.652296 | orchestrator | 2026-04-10 01:08:55.652300 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-10 01:08:55.652304 | orchestrator | Friday 10 April 2026 01:07:46 +0000 (0:00:02.524) 0:01:57.861 ********** 2026-04-10 01:08:55.652308 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:08:55.652312 | orchestrator | 2026-04-10 01:08:55.652316 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:08:55.652326 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 01:08:55.652330 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 01:08:55.652335 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 01:08:55.652341 | orchestrator | 2026-04-10 01:08:55.652346 | orchestrator | 2026-04-10 01:08:55.652352 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:08:55.652358 | orchestrator | Friday 10 April 2026 01:07:46 +0000 (0:00:00.284) 0:01:58.145 ********** 2026-04-10 
01:08:55.652363 | orchestrator | =============================================================================== 2026-04-10 01:08:55.652369 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.28s 2026-04-10 01:08:55.652375 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.02s 2026-04-10 01:08:55.652380 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.76s 2026-04-10 01:08:55.652386 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.52s 2026-04-10 01:08:55.652392 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.40s 2026-04-10 01:08:55.652398 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.30s 2026-04-10 01:08:55.652402 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.05s 2026-04-10 01:08:55.652406 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.88s 2026-04-10 01:08:55.652410 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.66s 2026-04-10 01:08:55.652414 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.63s 2026-04-10 01:08:55.652418 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.31s 2026-04-10 01:08:55.652425 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s 2026-04-10 01:08:55.652430 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s 2026-04-10 01:08:55.652433 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.12s 2026-04-10 01:08:55.652437 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.95s 2026-04-10 01:08:55.652441 
| orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.94s 2026-04-10 01:08:55.652445 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.85s 2026-04-10 01:08:55.652448 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s 2026-04-10 01:08:55.652452 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.58s 2026-04-10 01:08:55.652456 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.52s 2026-04-10 01:08:58.686887 | orchestrator | 2026-04-10 01:08:58 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state STARTED 2026-04-10 01:08:58.686947 | orchestrator | 2026-04-10 01:08:58 | INFO  | Wait 1 second(s) until the next check [... 33 further identical STARTED/"Wait 1 second(s) until the next check" poll cycles elided, repeated every ~3 s from 01:09:01 through 01:10:39 ...] 2026-04-10 01:10:42.237860 | orchestrator | 2026-04-10 01:10:42 | INFO  | Task df3b5ea9-8be3-4d61-967b-c249f665cb09 is in state SUCCESS 2026-04-10 01:10:42.237936 | orchestrator | 2026-04-10 01:10:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:10:42.239299 | orchestrator | 2026-04-10 01:10:42.239325 | orchestrator | 2026-04-10 01:10:42.239330 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:10:42.239334 | orchestrator | 2026-04-10 01:10:42.239338 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:10:42.239342 | orchestrator | Friday 10 April 2026 01:05:59 +0000 (0:00:00.377) 0:00:00.377 ********** 2026-04-10 01:10:42.239346 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.239351 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:10:42.239354 | orchestrator | ok: [testbed-node-2] 2026-04-10
01:10:42.239358 | orchestrator | 2026-04-10 01:10:42.239362 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:10:42.239366 | orchestrator | Friday 10 April 2026 01:06:00 +0000 (0:00:00.407) 0:00:00.784 ********** 2026-04-10 01:10:42.239370 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-10 01:10:42.239374 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-10 01:10:42.239378 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-10 01:10:42.239381 | orchestrator | 2026-04-10 01:10:42.239385 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-10 01:10:42.239389 | orchestrator | 2026-04-10 01:10:42.239393 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-10 01:10:42.239396 | orchestrator | Friday 10 April 2026 01:06:00 +0000 (0:00:00.386) 0:00:01.171 ********** 2026-04-10 01:10:42.239400 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:10:42.239404 | orchestrator | 2026-04-10 01:10:42.239408 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-04-10 01:10:42.239412 | orchestrator | Friday 10 April 2026 01:06:01 +0000 (0:00:00.680) 0:00:01.852 ********** 2026-04-10 01:10:42.239416 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-10 01:10:42.239420 | orchestrator | 2026-04-10 01:10:42.239424 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-10 01:10:42.239428 | orchestrator | Friday 10 April 2026 01:06:04 +0000 (0:00:03.323) 0:00:05.175 ********** 2026-04-10 01:10:42.239432 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-10 
01:10:42.239436 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-10 01:10:42.239440 | orchestrator | 2026-04-10 01:10:42.239444 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-10 01:10:42.239448 | orchestrator | Friday 10 April 2026 01:06:10 +0000 (0:00:06.247) 0:00:11.423 ********** 2026-04-10 01:10:42.239452 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-10 01:10:42.239455 | orchestrator | 2026-04-10 01:10:42.239467 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-10 01:10:42.239471 | orchestrator | Friday 10 April 2026 01:06:14 +0000 (0:00:03.220) 0:00:14.644 ********** 2026-04-10 01:10:42.239475 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-10 01:10:42.239479 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-10 01:10:42.239483 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-10 01:10:42.239486 | orchestrator | 2026-04-10 01:10:42.239490 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-10 01:10:42.239494 | orchestrator | Friday 10 April 2026 01:06:22 +0000 (0:00:08.085) 0:00:22.729 ********** 2026-04-10 01:10:42.239508 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-10 01:10:42.239512 | orchestrator | 2026-04-10 01:10:42.239516 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-10 01:10:42.239520 | orchestrator | Friday 10 April 2026 01:06:25 +0000 (0:00:03.001) 0:00:25.730 ********** 2026-04-10 01:10:42.239536 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-10 01:10:42.239540 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-10 01:10:42.239544 | orchestrator | 2026-04-10 
01:10:42.239548 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-10 01:10:42.239552 | orchestrator | Friday 10 April 2026 01:06:33 +0000 (0:00:08.055) 0:00:33.786 ********** 2026-04-10 01:10:42.239555 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-10 01:10:42.239559 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-10 01:10:42.239563 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-10 01:10:42.239567 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-10 01:10:42.239570 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-10 01:10:42.239574 | orchestrator | 2026-04-10 01:10:42.239578 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-10 01:10:42.239582 | orchestrator | Friday 10 April 2026 01:06:47 +0000 (0:00:14.069) 0:00:47.856 ********** 2026-04-10 01:10:42.239585 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:10:42.239589 | orchestrator | 2026-04-10 01:10:42.239593 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-04-10 01:10:42.239597 | orchestrator | Friday 10 April 2026 01:06:48 +0000 (0:00:00.754) 0:00:48.611 ********** 2026-04-10 01:10:42.239601 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.239605 | orchestrator | 2026-04-10 01:10:42.239608 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-10 01:10:42.239612 | orchestrator | Friday 10 April 2026 01:06:52 +0000 (0:00:04.511) 0:00:53.122 ********** 2026-04-10 01:10:42.239616 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.239620 | orchestrator | 2026-04-10 01:10:42.239623 | orchestrator | TASK [octavia : 
Get service project id] **************************************** 2026-04-10 01:10:42.239633 | orchestrator | Friday 10 April 2026 01:06:57 +0000 (0:00:04.482) 0:00:57.605 ********** 2026-04-10 01:10:42.239637 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.239641 | orchestrator | 2026-04-10 01:10:42.239645 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-10 01:10:42.239649 | orchestrator | Friday 10 April 2026 01:07:00 +0000 (0:00:03.340) 0:01:00.945 ********** 2026-04-10 01:10:42.239653 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-10 01:10:42.239656 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-10 01:10:42.239660 | orchestrator | 2026-04-10 01:10:42.239664 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-10 01:10:42.239668 | orchestrator | Friday 10 April 2026 01:07:11 +0000 (0:00:10.828) 0:01:11.774 ********** 2026-04-10 01:10:42.239672 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-10 01:10:42.239676 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-10 01:10:42.239680 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-10 01:10:42.239684 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-10 01:10:42.239688 | orchestrator | 2026-04-10 01:10:42.239692 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-10 01:10:42.239700 | orchestrator | Friday 10 April 2026 01:07:25 +0000 (0:00:14.426) 
0:01:26.201 ********** 2026-04-10 01:10:42.239739 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.239745 | orchestrator | 2026-04-10 01:10:42.239749 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-10 01:10:42.239753 | orchestrator | Friday 10 April 2026 01:07:31 +0000 (0:00:05.555) 0:01:31.757 ********** 2026-04-10 01:10:42.239757 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.239760 | orchestrator | 2026-04-10 01:10:42.239764 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-10 01:10:42.239768 | orchestrator | Friday 10 April 2026 01:07:36 +0000 (0:00:05.420) 0:01:37.177 ********** 2026-04-10 01:10:42.239771 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:10:42.239775 | orchestrator | 2026-04-10 01:10:42.239779 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-10 01:10:42.239783 | orchestrator | Friday 10 April 2026 01:07:37 +0000 (0:00:00.636) 0:01:37.814 ********** 2026-04-10 01:10:42.239786 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.239790 | orchestrator | 2026-04-10 01:10:42.239796 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-10 01:10:42.239835 | orchestrator | Friday 10 April 2026 01:07:42 +0000 (0:00:04.902) 0:01:42.716 ********** 2026-04-10 01:10:42.239952 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-04-10 01:10:42.239957 | orchestrator | 2026-04-10 01:10:42.239961 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-10 01:10:42.239965 | orchestrator | Friday 10 April 2026 01:07:43 +0000 (0:00:01.576) 0:01:44.292 ********** 2026-04-10 01:10:42.239969 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.239973 | orchestrator | 
changed: [testbed-node-2] 2026-04-10 01:10:42.239977 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.239980 | orchestrator | 2026-04-10 01:10:42.239984 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-10 01:10:42.239988 | orchestrator | Friday 10 April 2026 01:07:48 +0000 (0:00:05.240) 0:01:49.533 ********** 2026-04-10 01:10:42.239992 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.239996 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.240000 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.240004 | orchestrator | 2026-04-10 01:10:42.240007 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-10 01:10:42.240011 | orchestrator | Friday 10 April 2026 01:07:54 +0000 (0:00:05.493) 0:01:55.026 ********** 2026-04-10 01:10:42.240015 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.240019 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.240023 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.240026 | orchestrator | 2026-04-10 01:10:42.240030 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-10 01:10:42.240034 | orchestrator | Friday 10 April 2026 01:07:55 +0000 (0:00:00.826) 0:01:55.853 ********** 2026-04-10 01:10:42.240038 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.240042 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:10:42.240045 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:10:42.240049 | orchestrator | 2026-04-10 01:10:42.240053 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-10 01:10:42.240057 | orchestrator | Friday 10 April 2026 01:07:56 +0000 (0:00:01.656) 0:01:57.509 ********** 2026-04-10 01:10:42.240061 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.240064 | orchestrator | changed: [testbed-node-1] 
2026-04-10 01:10:42.240068 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.240072 | orchestrator | 2026-04-10 01:10:42.240076 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-10 01:10:42.240080 | orchestrator | Friday 10 April 2026 01:07:58 +0000 (0:00:01.219) 0:01:58.729 ********** 2026-04-10 01:10:42.240083 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.240091 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.240095 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.240098 | orchestrator | 2026-04-10 01:10:42.240102 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-10 01:10:42.240106 | orchestrator | Friday 10 April 2026 01:07:59 +0000 (0:00:01.130) 0:01:59.859 ********** 2026-04-10 01:10:42.240110 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.240114 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.240118 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.240121 | orchestrator | 2026-04-10 01:10:42.240128 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-10 01:10:42.240132 | orchestrator | Friday 10 April 2026 01:08:01 +0000 (0:00:02.245) 0:02:02.104 ********** 2026-04-10 01:10:42.240136 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.240140 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.240144 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.240147 | orchestrator | 2026-04-10 01:10:42.240151 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-10 01:10:42.240155 | orchestrator | Friday 10 April 2026 01:08:02 +0000 (0:00:01.460) 0:02:03.564 ********** 2026-04-10 01:10:42.240159 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.240163 | orchestrator | ok: [testbed-node-1] 2026-04-10 
01:10:42.240166 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:10:42.240170 | orchestrator | 2026-04-10 01:10:42.240174 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-10 01:10:42.240178 | orchestrator | Friday 10 April 2026 01:08:03 +0000 (0:00:00.536) 0:02:04.101 ********** 2026-04-10 01:10:42.240182 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:10:42.240185 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.240189 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:10:42.240193 | orchestrator | 2026-04-10 01:10:42.240197 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-10 01:10:42.240201 | orchestrator | Friday 10 April 2026 01:08:05 +0000 (0:00:02.414) 0:02:06.516 ********** 2026-04-10 01:10:42.240204 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:10:42.240208 | orchestrator | 2026-04-10 01:10:42.240212 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-10 01:10:42.240216 | orchestrator | Friday 10 April 2026 01:08:06 +0000 (0:00:00.588) 0:02:07.104 ********** 2026-04-10 01:10:42.240220 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.240224 | orchestrator | 2026-04-10 01:10:42.240227 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-10 01:10:42.240231 | orchestrator | Friday 10 April 2026 01:08:10 +0000 (0:00:03.851) 0:02:10.955 ********** 2026-04-10 01:10:42.240235 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:10:42.240239 | orchestrator | 2026-04-10 01:10:42.240242 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-10 01:10:42.240246 | orchestrator | Friday 10 April 2026 01:08:14 +0000 (0:00:03.832) 0:02:14.788 ********** 2026-04-10 01:10:42.240250 | 
orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-10 01:10:42.240254 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-10 01:10:42.240258 | orchestrator |
2026-04-10 01:10:42.240286 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-10 01:10:42.240291 | orchestrator | Friday 10 April 2026 01:08:21 +0000 (0:00:07.566) 0:02:22.354 **********
2026-04-10 01:10:42.240298 | orchestrator | ok: [testbed-node-0]
2026-04-10 01:10:42.240302 | orchestrator |
2026-04-10 01:10:42.240307 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-04-10 01:10:42.240313 | orchestrator | Friday 10 April 2026 01:08:25 +0000 (0:00:03.540) 0:02:25.895 **********
2026-04-10 01:10:42.240319 | orchestrator | ok: [testbed-node-0]
2026-04-10 01:10:42.240328 | orchestrator | ok: [testbed-node-1]
2026-04-10 01:10:42.240340 | orchestrator | ok: [testbed-node-2]
2026-04-10 01:10:42.240346 | orchestrator |
2026-04-10 01:10:42.240352 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-04-10 01:10:42.240358 | orchestrator | Friday 10 April 2026 01:08:25 +0000 (0:00:00.312) 0:02:26.207 **********
2026-04-10 01:10:42.240490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240631 | orchestrator |
2026-04-10 01:10:42.240635 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-10 01:10:42.240639 | orchestrator | Friday 10 April 2026 01:08:28 +0000 (0:00:02.764) 0:02:28.971 **********
2026-04-10 01:10:42.240643 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:10:42.240647 | orchestrator |
2026-04-10 01:10:42.240660 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-10 01:10:42.240664 | orchestrator | Friday 10 April 2026 01:08:28 +0000 (0:00:00.134) 0:02:29.106 **********
2026-04-10 01:10:42.240668 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:10:42.240672 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:10:42.240676 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:10:42.240679 | orchestrator |
2026-04-10 01:10:42.240683 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-10 01:10:42.240687 | orchestrator | Friday 10 April 2026 01:08:28 +0000 (0:00:00.323) 0:02:29.429 **********
2026-04-10 01:10:42.240691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240716 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:10:42.240730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240757 | orchestrator | skipping: [testbed-node-1]
2026-04-10 01:10:42.240762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240797 | orchestrator | skipping: [testbed-node-2]
2026-04-10 01:10:42.240801 | orchestrator |
2026-04-10 01:10:42.240804 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-10 01:10:42.240808 | orchestrator | Friday 10 April 2026 01:08:29 +0000 (0:00:00.708) 0:02:30.138 **********
2026-04-10 01:10:42.240812 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-10 01:10:42.240816 | orchestrator |
2026-04-10 01:10:42.240820 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-10 01:10:42.240824 | orchestrator | Friday 10 April 2026 01:08:30 +0000 (0:00:00.768) 0:02:30.907 **********
2026-04-10 01:10:42.240828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240913 | orchestrator |
2026-04-10 01:10:42.240917 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-10 01:10:42.240923 | orchestrator | Friday 10 April 2026 01:08:35 +0000 (0:00:05.032) 0:02:35.940 **********
2026-04-10 01:10:42.240927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-10 01:10:42.240946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-10 01:10:42.240950 | orchestrator | skipping: [testbed-node-0]
2026-04-10 01:10:42.240957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-10 01:10:42.240963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-10 01:10:42.240967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes':
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.240973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.240977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:10:42.240981 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:10:42.240985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-10 01:10:42.240989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 01:10:42.240998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:10:42.241010 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:10:42.241014 | orchestrator | 2026-04-10 01:10:42.241020 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-10 01:10:42.241024 | orchestrator | Friday 10 April 2026 01:08:36 +0000 (0:00:00.673) 0:02:36.613 ********** 2026-04-10 01:10:42.241028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-10 01:10:42.241032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 01:10:42.241038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:10:42.241053 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:10:42.241058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-04-10 01:10:42.241062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 01:10:42.241067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 01:10:42.241085 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:10:42.241089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-10 01:10:42.241093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-10 01:10:42.241099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-10 01:10:42.241109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-10 
01:10:42.241113 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:10:42.241117 | orchestrator | 2026-04-10 01:10:42.241121 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-10 01:10:42.241124 | orchestrator | Friday 10 April 2026 01:08:37 +0000 (0:00:01.057) 0:02:37.671 ********** 2026-04-10 01:10:42.241132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 
01:10:42.241210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241214 | orchestrator | 2026-04-10 01:10:42.241219 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-10 01:10:42.241223 | orchestrator | Friday 10 April 2026 01:08:42 +0000 (0:00:05.233) 0:02:42.904 ********** 2026-04-10 01:10:42.241228 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-10 01:10:42.241232 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-10 01:10:42.241237 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-10 01:10:42.241243 | orchestrator | 2026-04-10 01:10:42.241248 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-10 01:10:42.241254 | orchestrator | Friday 10 April 2026 01:08:43 +0000 (0:00:01.554) 0:02:44.459 ********** 2026-04-10 01:10:42.241259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241341 | orchestrator | 2026-04-10 01:10:42.241345 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2026-04-10 01:10:42.241349 | orchestrator | Friday 10 April 2026 01:09:00 +0000 (0:00:17.112) 0:03:01.571 ********** 2026-04-10 01:10:42.241353 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241357 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.241361 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.241364 | orchestrator | 2026-04-10 01:10:42.241368 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-10 01:10:42.241372 | orchestrator | Friday 10 April 2026 01:09:02 +0000 (0:00:01.984) 0:03:03.556 ********** 2026-04-10 01:10:42.241376 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241380 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241385 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241389 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241393 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241397 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241400 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241404 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241408 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241412 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241416 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241419 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241423 | orchestrator | 2026-04-10 01:10:42.241427 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2026-04-10 01:10:42.241431 | orchestrator | Friday 10 April 2026 01:09:08 +0000 (0:00:05.280) 0:03:08.836 ********** 2026-04-10 01:10:42.241435 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241438 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241442 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241448 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241452 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241456 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241459 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241463 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241467 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241471 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241474 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241478 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241482 | orchestrator | 2026-04-10 01:10:42.241486 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-10 01:10:42.241489 | orchestrator | Friday 10 April 2026 01:09:13 +0000 (0:00:05.263) 0:03:14.100 ********** 2026-04-10 01:10:42.241493 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241499 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241503 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-10 01:10:42.241506 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241510 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241514 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-10 01:10:42.241518 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241555 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241559 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-10 01:10:42.241563 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241567 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241571 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-10 01:10:42.241574 | orchestrator | 2026-04-10 01:10:42.241578 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-10 01:10:42.241582 | orchestrator | Friday 10 April 2026 01:09:19 +0000 (0:00:05.529) 0:03:19.630 ********** 2026-04-10 01:10:42.241586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-10 01:10:42.241607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-10 01:10:42.241619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-10 01:10:42.241668 | orchestrator | 2026-04-10 01:10:42.241672 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-10 01:10:42.241678 | orchestrator | Friday 10 April 2026 01:09:23 +0000 (0:00:04.077) 0:03:23.707 ********** 2026-04-10 01:10:42.241685 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:10:42.241692 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:10:42.241700 | orchestrator | skipping: [testbed-node-2] 
2026-04-10 01:10:42.241709 | orchestrator | 2026-04-10 01:10:42.241716 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-10 01:10:42.241722 | orchestrator | Friday 10 April 2026 01:09:23 +0000 (0:00:00.559) 0:03:24.267 ********** 2026-04-10 01:10:42.241727 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241734 | orchestrator | 2026-04-10 01:10:42.241740 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-10 01:10:42.241745 | orchestrator | Friday 10 April 2026 01:09:26 +0000 (0:00:02.478) 0:03:26.746 ********** 2026-04-10 01:10:42.241751 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241757 | orchestrator | 2026-04-10 01:10:42.241764 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-10 01:10:42.241770 | orchestrator | Friday 10 April 2026 01:09:28 +0000 (0:00:02.600) 0:03:29.347 ********** 2026-04-10 01:10:42.241777 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241783 | orchestrator | 2026-04-10 01:10:42.241789 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-10 01:10:42.241795 | orchestrator | Friday 10 April 2026 01:09:31 +0000 (0:00:02.680) 0:03:32.028 ********** 2026-04-10 01:10:42.241802 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241807 | orchestrator | 2026-04-10 01:10:42.241811 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-10 01:10:42.241815 | orchestrator | Friday 10 April 2026 01:09:33 +0000 (0:00:02.283) 0:03:34.311 ********** 2026-04-10 01:10:42.241819 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241822 | orchestrator | 2026-04-10 01:10:42.241826 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-10 01:10:42.241830 | orchestrator | 
Friday 10 April 2026 01:09:55 +0000 (0:00:22.165) 0:03:56.476 ********** 2026-04-10 01:10:42.241833 | orchestrator | 2026-04-10 01:10:42.241837 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-10 01:10:42.241841 | orchestrator | Friday 10 April 2026 01:09:55 +0000 (0:00:00.063) 0:03:56.540 ********** 2026-04-10 01:10:42.241845 | orchestrator | 2026-04-10 01:10:42.241848 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-10 01:10:42.241855 | orchestrator | Friday 10 April 2026 01:09:56 +0000 (0:00:00.068) 0:03:56.608 ********** 2026-04-10 01:10:42.241859 | orchestrator | 2026-04-10 01:10:42.241862 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-10 01:10:42.241866 | orchestrator | Friday 10 April 2026 01:09:56 +0000 (0:00:00.071) 0:03:56.679 ********** 2026-04-10 01:10:42.241870 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241874 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.241880 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.241887 | orchestrator | 2026-04-10 01:10:42.241896 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-10 01:10:42.241902 | orchestrator | Friday 10 April 2026 01:10:06 +0000 (0:00:10.353) 0:04:07.032 ********** 2026-04-10 01:10:42.241908 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241913 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.241919 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.241929 | orchestrator | 2026-04-10 01:10:42.241934 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-10 01:10:42.241940 | orchestrator | Friday 10 April 2026 01:10:18 +0000 (0:00:11.629) 0:04:18.662 ********** 2026-04-10 01:10:42.241946 | orchestrator | changed: [testbed-node-0] 
2026-04-10 01:10:42.241952 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.241958 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.241965 | orchestrator | 2026-04-10 01:10:42.241971 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-10 01:10:42.241978 | orchestrator | Friday 10 April 2026 01:10:23 +0000 (0:00:05.187) 0:04:23.850 ********** 2026-04-10 01:10:42.241982 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.241986 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.241990 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.241993 | orchestrator | 2026-04-10 01:10:42.241997 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-10 01:10:42.242001 | orchestrator | Friday 10 April 2026 01:10:33 +0000 (0:00:10.234) 0:04:34.085 ********** 2026-04-10 01:10:42.242005 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:10:42.242008 | orchestrator | changed: [testbed-node-1] 2026-04-10 01:10:42.242012 | orchestrator | changed: [testbed-node-2] 2026-04-10 01:10:42.242039 | orchestrator | 2026-04-10 01:10:42.242043 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:10:42.242047 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-10 01:10:42.242052 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-10 01:10:42.242056 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-10 01:10:42.242059 | orchestrator | 2026-04-10 01:10:42.242063 | orchestrator | 2026-04-10 01:10:42.242119 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:10:42.242125 | orchestrator | Friday 10 April 2026 01:10:39 +0000 
(0:00:06.135) 0:04:40.220 ********** 2026-04-10 01:10:42.242137 | orchestrator | =============================================================================== 2026-04-10 01:10:42.242143 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.17s 2026-04-10 01:10:42.242150 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.11s 2026-04-10 01:10:42.242157 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.43s 2026-04-10 01:10:42.242161 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.07s 2026-04-10 01:10:42.242164 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.63s 2026-04-10 01:10:42.242168 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.83s 2026-04-10 01:10:42.242172 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.35s 2026-04-10 01:10:42.242176 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.23s 2026-04-10 01:10:42.242179 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.09s 2026-04-10 01:10:42.242183 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.06s 2026-04-10 01:10:42.242187 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.57s 2026-04-10 01:10:42.242191 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.25s 2026-04-10 01:10:42.242194 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.14s 2026-04-10 01:10:42.242198 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.56s 2026-04-10 01:10:42.242202 | orchestrator | octavia : Copying certificate files for 
octavia-health-manager ---------- 5.53s 2026-04-10 01:10:42.242210 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.49s 2026-04-10 01:10:42.242214 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.42s 2026-04-10 01:10:42.242217 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.28s 2026-04-10 01:10:42.242221 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.26s 2026-04-10 01:10:42.242225 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.24s 2026-04-10 01:10:45.289513 | orchestrator | 2026-04-10 01:10:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:10:48.338881 | orchestrator | 2026-04-10 01:10:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:10:51.380330 | orchestrator | 2026-04-10 01:10:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:10:54.424966 | orchestrator | 2026-04-10 01:10:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:10:57.467704 | orchestrator | 2026-04-10 01:10:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:00.500826 | orchestrator | 2026-04-10 01:11:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:03.548630 | orchestrator | 2026-04-10 01:11:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:06.593321 | orchestrator | 2026-04-10 01:11:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:09.631417 | orchestrator | 2026-04-10 01:11:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:12.676051 | orchestrator | 2026-04-10 01:11:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:15.717760 | orchestrator | 2026-04-10 01:11:15 | INFO  | Wait 1 second(s) until refresh of running 
tasks 2026-04-10 01:11:18.764427 | orchestrator | 2026-04-10 01:11:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:21.804985 | orchestrator | 2026-04-10 01:11:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:24.843077 | orchestrator | 2026-04-10 01:11:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:27.889280 | orchestrator | 2026-04-10 01:11:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:30.938773 | orchestrator | 2026-04-10 01:11:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:33.982071 | orchestrator | 2026-04-10 01:11:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:37.020640 | orchestrator | 2026-04-10 01:11:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:40.066136 | orchestrator | 2026-04-10 01:11:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-10 01:11:43.108680 | orchestrator | 2026-04-10 01:11:43.327760 | orchestrator | 2026-04-10 01:11:43.335366 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Apr 10 01:11:43 UTC 2026 2026-04-10 01:11:43.335497 | orchestrator | 2026-04-10 01:11:43.615432 | orchestrator | ok: Runtime: 0:32:07.313788 2026-04-10 01:11:43.845639 | 2026-04-10 01:11:43.845797 | TASK [Bootstrap services] 2026-04-10 01:11:44.604941 | orchestrator | 2026-04-10 01:11:44.605048 | orchestrator | # BOOTSTRAP 2026-04-10 01:11:44.605062 | orchestrator | 2026-04-10 01:11:44.605070 | orchestrator | + set -e 2026-04-10 01:11:44.605077 | orchestrator | + echo 2026-04-10 01:11:44.605086 | orchestrator | + echo '# BOOTSTRAP' 2026-04-10 01:11:44.605095 | orchestrator | + echo 2026-04-10 01:11:44.605828 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-10 01:11:44.612855 | orchestrator | + set -e 2026-04-10 01:11:44.612904 | orchestrator | + sh -c 
/opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-10 01:11:49.430089 | orchestrator | 2026-04-10 01:11:49 | INFO  | It takes a moment until task 83ac11ff-acb2-4efe-9390-bdf9fae941a6 (flavor-manager) has been started and output is visible here. 2026-04-10 01:11:58.013591 | orchestrator | 2026-04-10 01:11:53 | INFO  | Flavor SCS-1L-1 created 2026-04-10 01:11:58.013694 | orchestrator | 2026-04-10 01:11:54 | INFO  | Flavor SCS-1L-1-5 created 2026-04-10 01:11:58.013704 | orchestrator | 2026-04-10 01:11:54 | INFO  | Flavor SCS-1V-2 created 2026-04-10 01:11:58.013709 | orchestrator | 2026-04-10 01:11:54 | INFO  | Flavor SCS-1V-2-5 created 2026-04-10 01:11:58.013713 | orchestrator | 2026-04-10 01:11:54 | INFO  | Flavor SCS-1V-4 created 2026-04-10 01:11:58.013717 | orchestrator | 2026-04-10 01:11:54 | INFO  | Flavor SCS-1V-4-10 created 2026-04-10 01:11:58.013721 | orchestrator | 2026-04-10 01:11:54 | INFO  | Flavor SCS-1V-8 created 2026-04-10 01:11:58.013726 | orchestrator | 2026-04-10 01:11:54 | INFO  | Flavor SCS-1V-8-20 created 2026-04-10 01:11:58.013739 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-2V-4 created 2026-04-10 01:11:58.013743 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-2V-4-10 created 2026-04-10 01:11:58.013746 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-2V-8 created 2026-04-10 01:11:58.013750 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-2V-8-20 created 2026-04-10 01:11:58.013754 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-2V-16 created 2026-04-10 01:11:58.013758 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-2V-16-50 created 2026-04-10 01:11:58.013762 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-4V-8 created 2026-04-10 01:11:58.013766 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-4V-8-20 created 2026-04-10 01:11:58.013770 | orchestrator | 2026-04-10 01:11:55 | INFO  | Flavor SCS-4V-16 created 2026-04-10 01:11:58.013773 | orchestrator | 
2026-04-10 01:11:56 | INFO  | Flavor SCS-4V-16-50 created 2026-04-10 01:11:58.013777 | orchestrator | 2026-04-10 01:11:56 | INFO  | Flavor SCS-4V-32 created 2026-04-10 01:11:58.013781 | orchestrator | 2026-04-10 01:11:56 | INFO  | Flavor SCS-4V-32-100 created 2026-04-10 01:11:58.013785 | orchestrator | 2026-04-10 01:11:56 | INFO  | Flavor SCS-8V-16 created 2026-04-10 01:11:58.013789 | orchestrator | 2026-04-10 01:11:56 | INFO  | Flavor SCS-8V-16-50 created 2026-04-10 01:11:58.013793 | orchestrator | 2026-04-10 01:11:56 | INFO  | Flavor SCS-8V-32 created 2026-04-10 01:11:58.013797 | orchestrator | 2026-04-10 01:11:56 | INFO  | Flavor SCS-8V-32-100 created 2026-04-10 01:11:58.013800 | orchestrator | 2026-04-10 01:11:57 | INFO  | Flavor SCS-16V-32 created 2026-04-10 01:11:58.013804 | orchestrator | 2026-04-10 01:11:57 | INFO  | Flavor SCS-16V-32-100 created 2026-04-10 01:11:58.013808 | orchestrator | 2026-04-10 01:11:57 | INFO  | Flavor SCS-2V-4-20s created 2026-04-10 01:11:58.013812 | orchestrator | 2026-04-10 01:11:57 | INFO  | Flavor SCS-4V-8-50s created 2026-04-10 01:11:58.013816 | orchestrator | 2026-04-10 01:11:57 | INFO  | Flavor SCS-4V-16-100s created 2026-04-10 01:11:58.013823 | orchestrator | 2026-04-10 01:11:57 | INFO  | Flavor SCS-8V-32-100s created 2026-04-10 01:11:59.636221 | orchestrator | 2026-04-10 01:11:59 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-10 01:12:09.733181 | orchestrator | 2026-04-10 01:12:09 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-10 01:12:09.817076 | orchestrator | 2026-04-10 01:12:09 | INFO  | Task 3555b4fc-a0f8-4d36-85b8-cdf355a15d78 (bootstrap-basic) was prepared for execution. 2026-04-10 01:12:09.817150 | orchestrator | 2026-04-10 01:12:09 | INFO  | It takes a moment until task 3555b4fc-a0f8-4d36-85b8-cdf355a15d78 (bootstrap-basic) has been started and output is visible here. 
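The flavor names created by the flavor-manager task above appear to follow the SCS flavor naming scheme, `SCS-<vCPUs>V-<RAM GiB>[-<disk GB>]` (a trailing `s` on the disk part marks a variant, e.g. `SCS-2V-4-20s`). A minimal shell sketch of decoding such a name; the parsing logic here is an illustration of the naming convention, not code from the flavor-manager itself:

```shell
# Decode an SCS-style flavor name like "SCS-4V-16-50" into its parts.
# Assumed convention: SCS-<cpus>V-<ram_gb>[-<disk_gb>] (illustrative only).
name="SCS-4V-16-50"

cpus=$(echo "$name" | awk -F- '{sub(/V$/, "", $2); print $2}')  # strip trailing V
ram=$(echo "$name"  | awk -F- '{print $3}')
disk=$(echo "$name" | awk -F- '{print $4}')

echo "$name -> ${cpus} vCPUs, ${ram} GiB RAM, ${disk} GB disk"
```

Running this prints `SCS-4V-16-50 -> 4 vCPUs, 16 GiB RAM, 50 GB disk`, matching how the created flavors in the log scale from `SCS-1L-1` up to `SCS-16V-32-100`.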
2026-04-10 01:12:57.051635 | orchestrator | 2026-04-10 01:12:57.051797 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-10 01:12:57.051810 | orchestrator | 2026-04-10 01:12:57.051815 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-10 01:12:57.051820 | orchestrator | Friday 10 April 2026 01:12:13 +0000 (0:00:00.114) 0:00:00.114 ********** 2026-04-10 01:12:57.051824 | orchestrator | ok: [localhost] 2026-04-10 01:12:57.051830 | orchestrator | 2026-04-10 01:12:57.051834 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-10 01:12:57.051838 | orchestrator | Friday 10 April 2026 01:12:15 +0000 (0:00:01.968) 0:00:02.082 ********** 2026-04-10 01:12:57.051844 | orchestrator | ok: [localhost] 2026-04-10 01:12:57.051848 | orchestrator | 2026-04-10 01:12:57.051852 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-10 01:12:57.051856 | orchestrator | Friday 10 April 2026 01:12:25 +0000 (0:00:09.979) 0:00:12.062 ********** 2026-04-10 01:12:57.051860 | orchestrator | changed: [localhost] 2026-04-10 01:12:57.051865 | orchestrator | 2026-04-10 01:12:57.051869 | orchestrator | TASK [Create public network] *************************************************** 2026-04-10 01:12:57.051876 | orchestrator | Friday 10 April 2026 01:12:32 +0000 (0:00:07.782) 0:00:19.845 ********** 2026-04-10 01:12:57.051881 | orchestrator | changed: [localhost] 2026-04-10 01:12:57.051887 | orchestrator | 2026-04-10 01:12:57.051896 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-10 01:12:57.051903 | orchestrator | Friday 10 April 2026 01:12:38 +0000 (0:00:05.500) 0:00:25.345 ********** 2026-04-10 01:12:57.051908 | orchestrator | changed: [localhost] 2026-04-10 01:12:57.051914 | orchestrator | 2026-04-10 01:12:57.051919 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-10 01:12:57.051925 | orchestrator | Friday 10 April 2026 01:12:44 +0000 (0:00:06.222) 0:00:31.567 ********** 2026-04-10 01:12:57.051931 | orchestrator | changed: [localhost] 2026-04-10 01:12:57.051936 | orchestrator | 2026-04-10 01:12:57.051942 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-10 01:12:57.051949 | orchestrator | Friday 10 April 2026 01:12:48 +0000 (0:00:04.251) 0:00:35.819 ********** 2026-04-10 01:12:57.051955 | orchestrator | changed: [localhost] 2026-04-10 01:12:57.051961 | orchestrator | 2026-04-10 01:12:57.051967 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-10 01:12:57.051985 | orchestrator | Friday 10 April 2026 01:12:52 +0000 (0:00:04.077) 0:00:39.897 ********** 2026-04-10 01:12:57.051991 | orchestrator | ok: [localhost] 2026-04-10 01:12:57.051998 | orchestrator | 2026-04-10 01:12:57.052004 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:12:57.052011 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-10 01:12:57.052018 | orchestrator | 2026-04-10 01:12:57.052025 | orchestrator | 2026-04-10 01:12:57.052032 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:12:57.052038 | orchestrator | Friday 10 April 2026 01:12:56 +0000 (0:00:03.845) 0:00:43.742 ********** 2026-04-10 01:12:57.052045 | orchestrator | =============================================================================== 2026-04-10 01:12:57.052052 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.98s 2026-04-10 01:12:57.052080 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.78s 2026-04-10 01:12:57.052087 | 
orchestrator | Set public network to default ------------------------------------------- 6.22s 2026-04-10 01:12:57.052093 | orchestrator | Create public network --------------------------------------------------- 5.50s 2026-04-10 01:12:57.052100 | orchestrator | Create public subnet ---------------------------------------------------- 4.25s 2026-04-10 01:12:57.052107 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.08s 2026-04-10 01:12:57.052114 | orchestrator | Create manager role ----------------------------------------------------- 3.85s 2026-04-10 01:12:57.052121 | orchestrator | Gathering Facts --------------------------------------------------------- 1.97s 2026-04-10 01:12:59.080436 | orchestrator | 2026-04-10 01:12:59 | INFO  | It takes a moment until task dc0efb99-3459-47f9-b54c-bec42fbdf201 (image-manager) has been started and output is visible here. 2026-04-10 01:13:43.159584 | orchestrator | 2026-04-10 01:13:01 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-10 01:13:43.159676 | orchestrator | 2026-04-10 01:13:02 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-10 01:13:43.159688 | orchestrator | 2026-04-10 01:13:02 | INFO  | Importing image Cirros 0.6.2 2026-04-10 01:13:43.159696 | orchestrator | 2026-04-10 01:13:02 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-10 01:13:43.159704 | orchestrator | 2026-04-10 01:13:04 | INFO  | Waiting for image to leave queued state... 2026-04-10 01:13:43.159712 | orchestrator | 2026-04-10 01:13:06 | INFO  | Waiting for import to complete... 
2026-04-10 01:13:43.159719 | orchestrator | 2026-04-10 01:13:16 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-10 01:13:43.159727 | orchestrator | 2026-04-10 01:13:17 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-10 01:13:43.159734 | orchestrator | 2026-04-10 01:13:17 | INFO  | Setting internal_version = 0.6.2 2026-04-10 01:13:43.159740 | orchestrator | 2026-04-10 01:13:17 | INFO  | Setting image_original_user = cirros 2026-04-10 01:13:43.159754 | orchestrator | 2026-04-10 01:13:17 | INFO  | Adding tag os:cirros 2026-04-10 01:13:43.159760 | orchestrator | 2026-04-10 01:13:17 | INFO  | Setting property architecture: x86_64 2026-04-10 01:13:43.159766 | orchestrator | 2026-04-10 01:13:17 | INFO  | Setting property hw_disk_bus: scsi 2026-04-10 01:13:43.159772 | orchestrator | 2026-04-10 01:13:18 | INFO  | Setting property hw_rng_model: virtio 2026-04-10 01:13:43.159779 | orchestrator | 2026-04-10 01:13:18 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-10 01:13:43.159785 | orchestrator | 2026-04-10 01:13:18 | INFO  | Setting property hw_watchdog_action: reset 2026-04-10 01:13:43.159791 | orchestrator | 2026-04-10 01:13:19 | INFO  | Setting property hypervisor_type: qemu 2026-04-10 01:13:43.159803 | orchestrator | 2026-04-10 01:13:19 | INFO  | Setting property os_distro: cirros 2026-04-10 01:13:43.159810 | orchestrator | 2026-04-10 01:13:19 | INFO  | Setting property os_purpose: minimal 2026-04-10 01:13:43.159816 | orchestrator | 2026-04-10 01:13:20 | INFO  | Setting property replace_frequency: never 2026-04-10 01:13:43.159822 | orchestrator | 2026-04-10 01:13:20 | INFO  | Setting property uuid_validity: none 2026-04-10 01:13:43.159829 | orchestrator | 2026-04-10 01:13:20 | INFO  | Setting property provided_until: none 2026-04-10 01:13:43.159835 | orchestrator | 2026-04-10 01:13:20 | INFO  | Setting property image_description: Cirros 2026-04-10 01:13:43.159841 | orchestrator | 2026-04-10 01:13:21 | INFO  | 
Setting property image_name: Cirros 2026-04-10 01:13:43.159857 | orchestrator | 2026-04-10 01:13:21 | INFO  | Setting property internal_version: 0.6.2 2026-04-10 01:13:43.159864 | orchestrator | 2026-04-10 01:13:21 | INFO  | Setting property image_original_user: cirros 2026-04-10 01:13:43.159870 | orchestrator | 2026-04-10 01:13:21 | INFO  | Setting property os_version: 0.6.2 2026-04-10 01:13:43.159876 | orchestrator | 2026-04-10 01:13:22 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-10 01:13:43.159883 | orchestrator | 2026-04-10 01:13:22 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-10 01:13:43.159889 | orchestrator | 2026-04-10 01:13:22 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-10 01:13:43.159895 | orchestrator | 2026-04-10 01:13:22 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-10 01:13:43.159903 | orchestrator | 2026-04-10 01:13:22 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-10 01:13:43.159909 | orchestrator | 2026-04-10 01:13:22 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-10 01:13:43.159916 | orchestrator | 2026-04-10 01:13:23 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-10 01:13:43.159922 | orchestrator | 2026-04-10 01:13:23 | INFO  | Importing image Cirros 0.6.3 2026-04-10 01:13:43.159928 | orchestrator | 2026-04-10 01:13:23 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-10 01:13:43.159934 | orchestrator | 2026-04-10 01:13:24 | INFO  | Waiting for image to leave queued state... 2026-04-10 01:13:43.159940 | orchestrator | 2026-04-10 01:13:26 | INFO  | Waiting for import to complete... 
2026-04-10 01:13:43.159958 | orchestrator | 2026-04-10 01:13:37 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-04-10 01:13:43.159964 | orchestrator | 2026-04-10 01:13:37 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-04-10 01:13:43.159970 | orchestrator | 2026-04-10 01:13:37 | INFO  | Setting internal_version = 0.6.3 2026-04-10 01:13:43.159977 | orchestrator | 2026-04-10 01:13:37 | INFO  | Setting image_original_user = cirros 2026-04-10 01:13:43.159982 | orchestrator | 2026-04-10 01:13:37 | INFO  | Adding tag os:cirros 2026-04-10 01:13:43.159989 | orchestrator | 2026-04-10 01:13:37 | INFO  | Setting property architecture: x86_64 2026-04-10 01:13:43.159994 | orchestrator | 2026-04-10 01:13:37 | INFO  | Setting property hw_disk_bus: scsi 2026-04-10 01:13:43.160000 | orchestrator | 2026-04-10 01:13:38 | INFO  | Setting property hw_rng_model: virtio 2026-04-10 01:13:43.160006 | orchestrator | 2026-04-10 01:13:38 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-10 01:13:43.160013 | orchestrator | 2026-04-10 01:13:38 | INFO  | Setting property hw_watchdog_action: reset 2026-04-10 01:13:43.160018 | orchestrator | 2026-04-10 01:13:38 | INFO  | Setting property hypervisor_type: qemu 2026-04-10 01:13:43.160024 | orchestrator | 2026-04-10 01:13:39 | INFO  | Setting property os_distro: cirros 2026-04-10 01:13:43.160030 | orchestrator | 2026-04-10 01:13:39 | INFO  | Setting property os_purpose: minimal 2026-04-10 01:13:43.160036 | orchestrator | 2026-04-10 01:13:39 | INFO  | Setting property replace_frequency: never 2026-04-10 01:13:43.160042 | orchestrator | 2026-04-10 01:13:39 | INFO  | Setting property uuid_validity: none 2026-04-10 01:13:43.160048 | orchestrator | 2026-04-10 01:13:40 | INFO  | Setting property provided_until: none 2026-04-10 01:13:43.160054 | orchestrator | 2026-04-10 01:13:40 | INFO  | Setting property image_description: Cirros 2026-04-10 01:13:43.160065 | orchestrator | 2026-04-10 01:13:40 | INFO  | 
Setting property image_name: Cirros 2026-04-10 01:13:43.160071 | orchestrator | 2026-04-10 01:13:41 | INFO  | Setting property internal_version: 0.6.3 2026-04-10 01:13:43.160077 | orchestrator | 2026-04-10 01:13:41 | INFO  | Setting property image_original_user: cirros 2026-04-10 01:13:43.160083 | orchestrator | 2026-04-10 01:13:41 | INFO  | Setting property os_version: 0.6.3 2026-04-10 01:13:43.160089 | orchestrator | 2026-04-10 01:13:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-10 01:13:43.160095 | orchestrator | 2026-04-10 01:13:42 | INFO  | Setting property image_build_date: 2024-09-26 2026-04-10 01:13:43.160101 | orchestrator | 2026-04-10 01:13:42 | INFO  | Checking status of 'Cirros 0.6.3' 2026-04-10 01:13:43.160107 | orchestrator | 2026-04-10 01:13:42 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-04-10 01:13:43.160113 | orchestrator | 2026-04-10 01:13:42 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-04-10 01:13:43.440822 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh 2026-04-10 01:13:45.443034 | orchestrator | 2026-04-10 01:13:45 | INFO  | date: 2026-04-09 2026-04-10 01:13:45.443121 | orchestrator | 2026-04-10 01:13:45 | INFO  | image: octavia-amphora-haproxy-2024.2.20260409.qcow2 2026-04-10 01:13:45.443145 | orchestrator | 2026-04-10 01:13:45 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2 2026-04-10 01:13:45.443161 | orchestrator | 2026-04-10 01:13:45 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2.CHECKSUM 2026-04-10 01:13:45.609695 | orchestrator | 2026-04-10 01:13:45 | INFO  | checksum: 8d87a584e20490e0986eb683817610aad621ddd76b8738398584d5449d1a8f22 2026-04-10 01:13:45.705245 | orchestrator | 
2026-04-10 01:13:45 | INFO  | It takes a moment until task 7cbbb42c-cb93-4656-aeda-a07411a539c6 (image-manager) has been started and output is visible here. 2026-04-10 01:14:48.335694 | orchestrator | 2026-04-10 01:13:47 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-09' 2026-04-10 01:14:48.335831 | orchestrator | 2026-04-10 01:13:48 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2: 200 2026-04-10 01:14:48.335853 | orchestrator | 2026-04-10 01:13:48 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-09 2026-04-10 01:14:48.335858 | orchestrator | 2026-04-10 01:13:48 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2 2026-04-10 01:14:48.335863 | orchestrator | 2026-04-10 01:13:49 | INFO  | Waiting for image to leave queued state... 2026-04-10 01:14:48.335873 | orchestrator | 2026-04-10 01:13:51 | INFO  | Waiting for import to complete... 2026-04-10 01:14:48.335878 | orchestrator | 2026-04-10 01:14:01 | INFO  | Waiting for import to complete... 2026-04-10 01:14:48.335881 | orchestrator | 2026-04-10 01:14:11 | INFO  | Waiting for import to complete... 2026-04-10 01:14:48.335886 | orchestrator | 2026-04-10 01:14:21 | INFO  | Waiting for import to complete... 2026-04-10 01:14:48.335892 | orchestrator | 2026-04-10 01:14:31 | INFO  | Waiting for import to complete... 
2026-04-10 01:14:48.335896 | orchestrator | 2026-04-10 01:14:42 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-09' successfully completed, reloading images 2026-04-10 01:14:48.335917 | orchestrator | 2026-04-10 01:14:42 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-09' 2026-04-10 01:14:48.335921 | orchestrator | 2026-04-10 01:14:42 | INFO  | Setting internal_version = 2026-04-09 2026-04-10 01:14:48.335925 | orchestrator | 2026-04-10 01:14:42 | INFO  | Setting image_original_user = ubuntu 2026-04-10 01:14:48.335930 | orchestrator | 2026-04-10 01:14:42 | INFO  | Adding tag amphora 2026-04-10 01:14:48.335934 | orchestrator | 2026-04-10 01:14:43 | INFO  | Adding tag os:ubuntu 2026-04-10 01:14:48.335938 | orchestrator | 2026-04-10 01:14:43 | INFO  | Setting property architecture: x86_64 2026-04-10 01:14:48.335941 | orchestrator | 2026-04-10 01:14:43 | INFO  | Setting property hw_disk_bus: scsi 2026-04-10 01:14:48.335945 | orchestrator | 2026-04-10 01:14:43 | INFO  | Setting property hw_rng_model: virtio 2026-04-10 01:14:48.335949 | orchestrator | 2026-04-10 01:14:44 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-10 01:14:48.335953 | orchestrator | 2026-04-10 01:14:44 | INFO  | Setting property hw_watchdog_action: reset 2026-04-10 01:14:48.335957 | orchestrator | 2026-04-10 01:14:44 | INFO  | Setting property hypervisor_type: qemu 2026-04-10 01:14:48.335961 | orchestrator | 2026-04-10 01:14:44 | INFO  | Setting property os_distro: ubuntu 2026-04-10 01:14:48.335965 | orchestrator | 2026-04-10 01:14:44 | INFO  | Setting property replace_frequency: quarterly 2026-04-10 01:14:48.335969 | orchestrator | 2026-04-10 01:14:45 | INFO  | Setting property uuid_validity: last-1 2026-04-10 01:14:48.335973 | orchestrator | 2026-04-10 01:14:45 | INFO  | Setting property provided_until: none 2026-04-10 01:14:48.335978 | orchestrator | 2026-04-10 01:14:45 | INFO  | Setting property os_purpose: network 2026-04-10 01:14:48.335984 | orchestrator 
| 2026-04-10 01:14:46 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-04-10 01:14:48.336003 | orchestrator | 2026-04-10 01:14:46 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-04-10 01:14:48.336010 | orchestrator | 2026-04-10 01:14:46 | INFO  | Setting property internal_version: 2026-04-09 2026-04-10 01:14:48.336015 | orchestrator | 2026-04-10 01:14:46 | INFO  | Setting property image_original_user: ubuntu 2026-04-10 01:14:48.336021 | orchestrator | 2026-04-10 01:14:47 | INFO  | Setting property os_version: 2026-04-09 2026-04-10 01:14:48.336028 | orchestrator | 2026-04-10 01:14:47 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260409.qcow2 2026-04-10 01:14:48.336033 | orchestrator | 2026-04-10 01:14:47 | INFO  | Setting property image_build_date: 2026-04-09 2026-04-10 01:14:48.336039 | orchestrator | 2026-04-10 01:14:47 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-09' 2026-04-10 01:14:48.336045 | orchestrator | 2026-04-10 01:14:47 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-09' 2026-04-10 01:14:48.336051 | orchestrator | 2026-04-10 01:14:48 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-04-10 01:14:48.336073 | orchestrator | 2026-04-10 01:14:48 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-04-10 01:14:48.336081 | orchestrator | 2026-04-10 01:14:48 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-04-10 01:14:48.336087 | orchestrator | 2026-04-10 01:14:48 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-04-10 01:14:48.999393 | orchestrator | ok: Runtime: 0:03:04.363119 2026-04-10 01:14:49.022281 | 2026-04-10 01:14:49.022433 | TASK [Run checks] 2026-04-10 01:14:49.763120 | orchestrator | + set -e 2026-04-10 01:14:49.763300 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-04-10 01:14:49.763324 | orchestrator | ++ export INTERACTIVE=false 2026-04-10 01:14:49.763337 | orchestrator | ++ INTERACTIVE=false 2026-04-10 01:14:49.763347 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-10 01:14:49.763354 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-10 01:14:49.763371 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-10 01:14:49.763767 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-10 01:14:49.770097 | orchestrator | 2026-04-10 01:14:49.770176 | orchestrator | # CHECK 2026-04-10 01:14:49.770181 | orchestrator | 2026-04-10 01:14:49.770186 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-10 01:14:49.770194 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-10 01:14:49.770199 | orchestrator | + echo 2026-04-10 01:14:49.770203 | orchestrator | + echo '# CHECK' 2026-04-10 01:14:49.770207 | orchestrator | + echo 2026-04-10 01:14:49.770256 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-10 01:14:49.770999 | orchestrator | ++ semver latest 5.0.0 2026-04-10 01:14:49.834329 | orchestrator | 2026-04-10 01:14:49.834410 | orchestrator | ## Containers @ testbed-manager 2026-04-10 01:14:49.834417 | orchestrator | 2026-04-10 01:14:49.834424 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-10 01:14:49.834428 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-10 01:14:49.834433 | orchestrator | + echo 2026-04-10 01:14:49.834437 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-10 01:14:49.834443 | orchestrator | + echo 2026-04-10 01:14:49.834447 | orchestrator | + osism container testbed-manager ps 2026-04-10 01:14:50.997038 | orchestrator | 2026-04-10 01:14:50 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-04-10 01:14:51.376944 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-04-10 01:14:51.377059 | orchestrator | 2f8b9c3eaec6 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2026-04-10 01:14:51.377083 | orchestrator | 4dee833d5694 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2026-04-10 01:14:51.377119 | orchestrator | fa4ba355adb2 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-10 01:14:51.377128 | orchestrator | 5717c6a335da registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-10 01:14:51.377141 | orchestrator | cd0339e52666 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2026-04-10 01:14:51.377149 | orchestrator | 4193d84efb4a registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient 2026-04-10 01:14:51.377157 | orchestrator | 4ae256c7e9a2 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-10 01:14:51.377165 | orchestrator | 290266b18125 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-10 01:14:51.377207 | orchestrator | 5e0e4c8d617b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-10 01:14:51.377215 | orchestrator | a712810ac539 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 29 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin 2026-04-10 01:14:51.377222 | orchestrator | 6de7b2e3c45b registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 29 minutes openstackclient 2026-04-10 01:14:51.377230 | orchestrator | 594b4858302f 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer 2026-04-10 01:14:51.377238 | orchestrator | 52cd31f6997b registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-10 01:14:51.377245 | orchestrator | 7da91ca5bf1f registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 35 minutes (healthy) manager-inventory_reconciler-1 2026-04-10 01:14:51.377253 | orchestrator | 4294d60a2f15 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) ceph-ansible 2026-04-10 01:14:51.377276 | orchestrator | 6be717793cf4 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-kubernetes 2026-04-10 01:14:51.377290 | orchestrator | 1b817ae6f6e3 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) osism-ansible 2026-04-10 01:14:51.377298 | orchestrator | f87ff0a450ff registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 36 minutes (healthy) kolla-ansible 2026-04-10 01:14:51.377305 | orchestrator | 551a8c033dec registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 36 minutes (healthy) 8000/tcp manager-ara-server-1 2026-04-10 01:14:51.377313 | orchestrator | ac43aeeae57b registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-openstack-1 2026-04-10 01:14:51.377320 | orchestrator | c06565c8f2c5 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-beat-1 2026-04-10 01:14:51.377327 | orchestrator | ddd455e77f6f registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 36 minutes (healthy) osismclient 2026-04-10 01:14:51.377335 | 
orchestrator | 558eaad43b00 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 36 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-10 01:14:51.377348 | orchestrator | 03d019aabfc4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-flower-1 2026-04-10 01:14:51.377355 | orchestrator | ea9c84c1a0cf registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 3306/tcp manager-mariadb-1 2026-04-10 01:14:51.377363 | orchestrator | 3482be51017f registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 36 minutes (healthy) 6379/tcp manager-redis-1 2026-04-10 01:14:51.377370 | orchestrator | c0da102bd753 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-10 01:14:51.377378 | orchestrator | b4a5ff1a850d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 36 minutes (healthy) manager-listener-1 2026-04-10 01:14:51.377385 | orchestrator | 55951c1d9b6d registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-10 01:14:51.524901 | orchestrator | 2026-04-10 01:14:51.524998 | orchestrator | ## Images @ testbed-manager 2026-04-10 01:14:51.525008 | orchestrator | 2026-04-10 01:14:51.525017 | orchestrator | + echo 2026-04-10 01:14:51.525024 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-10 01:14:51.525033 | orchestrator | + echo 2026-04-10 01:14:51.525045 | orchestrator | + osism container testbed-manager images 2026-04-10 01:14:53.004290 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-10 01:14:53.004382 | orchestrator | registry.osism.tech/osism/osism-ansible latest 2618df6e7a57 
About an hour ago 638MB 2026-04-10 01:14:53.004392 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 4a2db44924b7 About an hour ago 636MB 2026-04-10 01:14:53.004397 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 37e7ba8fa666 About an hour ago 585MB 2026-04-10 01:14:53.004401 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest b5197cd4d7dc About an hour ago 1.24GB 2026-04-10 01:14:53.004423 | orchestrator | registry.osism.tech/osism/osism latest 0421f1ba628f About an hour ago 408MB 2026-04-10 01:14:53.004427 | orchestrator | registry.osism.tech/osism/osism-frontend latest fff0723bf0f3 About an hour ago 212MB 2026-04-10 01:14:53.004432 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 1be2aa599ef5 About an hour ago 357MB 2026-04-10 01:14:53.004436 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 72f9072e8d86 21 hours ago 239MB 2026-04-10 01:14:53.004440 | orchestrator | registry.osism.tech/osism/cephclient reef 31aefb641332 21 hours ago 453MB 2026-04-10 01:14:53.004444 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 c1b3cb67b1fe 45 hours ago 404MB 2026-04-10 01:14:53.004448 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 45 hours ago 357MB 2026-04-10 01:14:53.004451 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 3d6347c81b05 45 hours ago 308MB 2026-04-10 01:14:53.004455 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 45 hours ago 306MB 2026-04-10 01:14:53.004481 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 47 hours ago 265MB 2026-04-10 01:14:53.004506 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b8a664c9cb1b 47 hours ago 579MB 2026-04-10 01:14:53.004512 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e0a7aa0c103d 47 hours ago 668MB 2026-04-10 01:14:53.004518 | orchestrator | 
registry.osism.tech/kolla/prometheus-v2-server 2024.2 6484c96cb268 47 hours ago 839MB 2026-04-10 01:14:53.004523 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-10 01:14:53.004529 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-10 01:14:53.004535 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-10 01:14:53.004541 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB 2026-04-10 01:14:53.004547 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-10 01:14:53.004553 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-10 01:14:53.004560 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-10 01:14:53.177449 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-10 01:14:53.177620 | orchestrator | ++ semver latest 5.0.0 2026-04-10 01:14:53.221169 | orchestrator | 2026-04-10 01:14:53.221254 | orchestrator | ## Containers @ testbed-node-0 2026-04-10 01:14:53.221262 | orchestrator | 2026-04-10 01:14:53.221266 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-10 01:14:53.221270 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-10 01:14:53.221275 | orchestrator | + echo 2026-04-10 01:14:53.221280 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-10 01:14:53.221286 | orchestrator | + echo 2026-04-10 01:14:53.221292 | orchestrator | + osism container testbed-node-0 ps 2026-04-10 01:14:54.653477 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-10 01:14:54.653559 | orchestrator | 16ab8b4d029a registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-10 
01:14:54.653567 | orchestrator | 218fdfc58693 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-10 01:14:54.653571 | orchestrator | ed899fae7c79 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-10 01:14:54.653576 | orchestrator | 4c0a34392872 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-10 01:14:54.653580 | orchestrator | 323fac162f50 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-10 01:14:54.653584 | orchestrator | 3ef180692b22 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-04-10 01:14:54.653587 | orchestrator | c9cdf7421f84 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-10 01:14:54.653608 | orchestrator | c3579d18fa25 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-10 01:14:54.653612 | orchestrator | 942322c492ce registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-10 01:14:54.653631 | orchestrator | eaccc5798f6f registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-10 01:14:54.653636 | orchestrator | bb473afed8be registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-10 01:14:54.653640 | orchestrator | c928b2a23e8d registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2026-04-10 01:14:54.653645 | 
orchestrator | dcb53e84ef49 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-10 01:14:54.653651 | orchestrator | 8158a1ec27dd registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-10 01:14:54.653661 | orchestrator | 6139b4f397e1 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-10 01:14:54.653668 | orchestrator | aa59f08a0822 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-10 01:14:54.653674 | orchestrator | 42e97059bbd3 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-10 01:14:54.653681 | orchestrator | 71a47c4b995f registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-10 01:14:54.653686 | orchestrator | f3a29d6ff454 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-04-10 01:14:54.653692 | orchestrator | 757d9a27898c registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-10 01:14:54.653697 | orchestrator | b1d0502d6db7 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-10 01:14:54.653714 | orchestrator | d104cccec106 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-10 01:14:54.653720 | orchestrator | 1f9b0b3ce010 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) 
barbican_api 2026-04-10 01:14:54.653726 | orchestrator | 1f73bf3fe949 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-10 01:14:54.653733 | orchestrator | bef0bd045ad8 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_volume 2026-04-10 01:14:54.653742 | orchestrator | d6f76e356aeb registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-10 01:14:54.653748 | orchestrator | 53a767a285f8 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-10 01:14:54.653755 | orchestrator | db83e37ebdf1 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-10 01:14:54.653767 | orchestrator | 7964332a9eb3 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-10 01:14:54.653785 | orchestrator | ccb1ff7c3785 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-10 01:14:54.653790 | orchestrator | 81ce84550d33 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-10 01:14:54.653793 | orchestrator | bdd7669be84f registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-10 01:14:54.653797 | orchestrator | e3f1d845398b registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-10 01:14:54.653801 | orchestrator | b0f8f66484af registry.osism.tech/osism/ceph-daemon:reef 
"/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2026-04-10 01:14:54.653805 | orchestrator | f9e61528beb4 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-10 01:14:54.653809 | orchestrator | 8c4292375f39 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-10 01:14:54.653813 | orchestrator | 6824f2ea0a1e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2026-04-10 01:14:54.653817 | orchestrator | 0dfc464e459e registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-10 01:14:54.653820 | orchestrator | 1e317baa3a1a registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-04-10 01:14:54.653824 | orchestrator | 0126d316622a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2026-04-10 01:14:54.653828 | orchestrator | 7259b639cfcc registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2026-04-10 01:14:54.653832 | orchestrator | 6e30f358a21b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0 2026-04-10 01:14:54.653836 | orchestrator | fc00af9050c0 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 21 minutes keepalived 2026-04-10 01:14:54.653840 | orchestrator | cd6e5394b6af registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-10 01:14:54.653849 | orchestrator | a1d266357662 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 
2026-04-10 01:14:54.653853 | orchestrator | 72a7f5c4c87b registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-10 01:14:54.653857 | orchestrator | 4b3fd7e24350 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-10 01:14:54.653861 | orchestrator | fc7f07bc049b registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2026-04-10 01:14:54.653871 | orchestrator | 5e279f46fe2c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0 2026-04-10 01:14:54.653875 | orchestrator | 63072de27e2f registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2026-04-10 01:14:54.653879 | orchestrator | 0d580033d8f3 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-04-10 01:14:54.653883 | orchestrator | 20c93cd33790 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-10 01:14:54.653887 | orchestrator | 29f1a94b3a58 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-10 01:14:54.653891 | orchestrator | f312a98ce5bd registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-10 01:14:54.653897 | orchestrator | 4e76436b4d10 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-10 01:14:54.653901 | orchestrator | 7f853cba0ea1 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-10 01:14:54.653905 | orchestrator | 82ee36d1faf8 
registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-10 01:14:54.653909 | orchestrator | c8e6ef396297 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-10 01:14:54.653913 | orchestrator | 2d60b0d45157 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2026-04-10 01:14:54.786622 | orchestrator | 2026-04-10 01:14:54.786686 | orchestrator | ## Images @ testbed-node-0 2026-04-10 01:14:54.786697 | orchestrator | 2026-04-10 01:14:54.786705 | orchestrator | + echo 2026-04-10 01:14:54.786712 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-10 01:14:54.786720 | orchestrator | + echo 2026-04-10 01:14:54.786726 | orchestrator | + osism container testbed-node-0 images 2026-04-10 01:14:56.311335 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-10 01:14:56.311431 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 45025d19a8a7 16 hours ago 848MB 2026-04-10 01:14:56.311439 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 cd763de3e75b 16 hours ago 848MB 2026-04-10 01:14:56.311444 | orchestrator | registry.osism.tech/osism/ceph-daemon reef da85555ca3b8 16 hours ago 1.35GB 2026-04-10 01:14:56.311451 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4c5bda7121dd 45 hours ago 266MB 2026-04-10 01:14:56.311506 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1b423135131d 45 hours ago 273MB 2026-04-10 01:14:56.311513 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 20e411de4aa7 45 hours ago 273MB 2026-04-10 01:14:56.311519 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ed0a26f28f7c 45 hours ago 452MB 2026-04-10 01:14:56.311525 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 799699931a41 45 hours ago 298MB 2026-04-10 01:14:56.311531 | orchestrator | 
registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 45 hours ago 357MB 2026-04-10 01:14:56.311537 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8b7b44f2563a 45 hours ago 292MB 2026-04-10 01:14:56.311559 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 58219dd9eee5 45 hours ago 301MB 2026-04-10 01:14:56.311567 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 45 hours ago 306MB 2026-04-10 01:14:56.311574 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 304562932cfa 45 hours ago 279MB 2026-04-10 01:14:56.311582 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e3352e08634e 45 hours ago 279MB 2026-04-10 01:14:56.311589 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b3cfae2d4a21 45 hours ago 975MB 2026-04-10 01:14:56.311597 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 aefbc46ee397 45 hours ago 1.4GB 2026-04-10 01:14:56.311614 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 61b46b13fe15 45 hours ago 1.41GB 2026-04-10 01:14:56.311622 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6cafd41453ca 45 hours ago 1.41GB 2026-04-10 01:14:56.311630 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 31ecb4717921 45 hours ago 1.72GB 2026-04-10 01:14:56.311637 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 0e8d7891d417 45 hours ago 990MB 2026-04-10 01:14:56.311644 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d04f4045e6e0 45 hours ago 991MB 2026-04-10 01:14:56.311652 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e3fd7619dfad 45 hours ago 991MB 2026-04-10 01:14:56.311658 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e66da8e4e8e4 45 hours ago 1.16GB 2026-04-10 01:14:56.311663 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 173d7508c3d0 45 hours 
ago 1.04GB 2026-04-10 01:14:56.311669 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e625c11d2aba 45 hours ago 1.04GB 2026-04-10 01:14:56.311675 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5e4193e479dd 45 hours ago 1.07GB 2026-04-10 01:14:56.311681 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 133764135858 45 hours ago 1.13GB 2026-04-10 01:14:56.311687 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 dd2b3fb7f1cd 45 hours ago 1.24GB 2026-04-10 01:14:56.311692 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 a5ae6c2a915f 45 hours ago 976MB 2026-04-10 01:14:56.311698 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 d0944801676c 45 hours ago 975MB 2026-04-10 01:14:56.311703 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 5b4036922655 45 hours ago 1.03GB 2026-04-10 01:14:56.311709 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 dbdb26832643 45 hours ago 1.05GB 2026-04-10 01:14:56.311714 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c4d30b7728c1 45 hours ago 1.03GB 2026-04-10 01:14:56.311719 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 47c9d89c9659 45 hours ago 1.05GB 2026-04-10 01:14:56.311724 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 7ecfa7c2d4c0 45 hours ago 1.03GB 2026-04-10 01:14:56.311729 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf471ac8c087 45 hours ago 1.1GB 2026-04-10 01:14:56.311762 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 6c2771325ef1 45 hours ago 989MB 2026-04-10 01:14:56.311768 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 1ca54700db6e 45 hours ago 983MB 2026-04-10 01:14:56.311774 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f12ce9cf8572 45 hours ago 984MB 2026-04-10 01:14:56.311779 | orchestrator | registry.osism.tech/kolla/designate-mdns 
2024.2 71d34dfb5386 45 hours ago 984MB 2026-04-10 01:14:56.311798 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 49c5d2e5a9c9 45 hours ago 989MB 2026-04-10 01:14:56.311804 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 e0b3465740e7 45 hours ago 984MB 2026-04-10 01:14:56.311809 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 5eb0eb38814b 45 hours ago 990MB 2026-04-10 01:14:56.311815 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ad40fa21c96b 45 hours ago 1.05GB 2026-04-10 01:14:56.311820 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 fdb68ba12480 45 hours ago 974MB 2026-04-10 01:14:56.311830 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 41aed91cb434 45 hours ago 974MB 2026-04-10 01:14:56.311835 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 06b1e3f48771 45 hours ago 974MB 2026-04-10 01:14:56.311841 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 45e213225baf 45 hours ago 973MB 2026-04-10 01:14:56.311846 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 21078444d17b 45 hours ago 1.21GB 2026-04-10 01:14:56.311852 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 43d66ea212d8 45 hours ago 1.37GB 2026-04-10 01:14:56.311858 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7fd0568028b5 45 hours ago 1.21GB 2026-04-10 01:14:56.311864 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9c2a462a150e 45 hours ago 1.21GB 2026-04-10 01:14:56.311869 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 000506bf22df 45 hours ago 840MB 2026-04-10 01:14:56.311875 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 dacd1a06688c 45 hours ago 840MB 2026-04-10 01:14:56.311881 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 47 hours ago 1.56GB 2026-04-10 01:14:56.311887 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 
15bb65e2b02e 47 hours ago 1.53GB 2026-04-10 01:14:56.311892 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ddd2e742b66d 47 hours ago 276MB 2026-04-10 01:14:56.311898 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 47 hours ago 265MB 2026-04-10 01:14:56.311905 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 987bccb7e29c 47 hours ago 1.03GB 2026-04-10 01:14:56.311910 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 289da7c7eeb7 47 hours ago 322MB 2026-04-10 01:14:56.311916 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c28009080316 47 hours ago 274MB 2026-04-10 01:14:56.311922 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d9085bb7b182 47 hours ago 411MB 2026-04-10 01:14:56.311928 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b8a664c9cb1b 47 hours ago 579MB 2026-04-10 01:14:56.311934 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e0a7aa0c103d 47 hours ago 668MB 2026-04-10 01:14:56.311939 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0f4a765fdbd2 47 hours ago 1.15GB 2026-04-10 01:14:56.453272 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-10 01:14:56.453791 | orchestrator | ++ semver latest 5.0.0 2026-04-10 01:14:56.501615 | orchestrator | 2026-04-10 01:14:56.501663 | orchestrator | ## Containers @ testbed-node-1 2026-04-10 01:14:56.501672 | orchestrator | 2026-04-10 01:14:56.501677 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-10 01:14:56.501682 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-10 01:14:56.501687 | orchestrator | + echo 2026-04-10 01:14:56.501692 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-10 01:14:56.501698 | orchestrator | + echo 2026-04-10 01:14:56.501704 | orchestrator | + osism container testbed-node-1 ps 2026-04-10 01:14:57.955389 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-10 01:14:57.955443 | orchestrator | 
0a8e3f507454 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-10 01:14:57.955449 | orchestrator | 7a4da829704a registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-10 01:14:57.955454 | orchestrator | 81678d1bb638 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-10 01:14:57.955487 | orchestrator | 14a3d78ca6b7 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-10 01:14:57.955491 | orchestrator | c8d58f54c7bf registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-10 01:14:57.955511 | orchestrator | 4a58c0951f8e registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-10 01:14:57.955516 | orchestrator | 51964c9f2325 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-10 01:14:57.955520 | orchestrator | c5013a259a97 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-10 01:14:57.955526 | orchestrator | 8a3facf8074c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-10 01:14:57.955530 | orchestrator | 267c0cbca12d registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-10 01:14:57.955534 | orchestrator | e2c1672e44d5 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-10 01:14:57.955538 | orchestrator | 8e25b646d289 
registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2026-04-10 01:14:57.955542 | orchestrator | ce40e1244c18 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-10 01:14:57.955546 | orchestrator | a8f8a791721f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-10 01:14:57.955550 | orchestrator | 39ca3da9bbb9 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-10 01:14:57.955554 | orchestrator | 6d9a5729aaf7 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-10 01:14:57.955558 | orchestrator | 9f00a87ab1d2 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-10 01:14:57.955562 | orchestrator | 84e4f660be93 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-10 01:14:57.955566 | orchestrator | 97e1b32fb65c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-04-10 01:14:57.955580 | orchestrator | af4394c74af6 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-04-10 01:14:57.955584 | orchestrator | a4daf86e5b52 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-10 01:14:57.955596 | orchestrator | afbfcfb294ac registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 
2026-04-10 01:14:57.955600 | orchestrator | 177c81b34dbc registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-10 01:14:57.955604 | orchestrator | 777689af0530 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-10 01:14:57.955611 | orchestrator | d1b39af7c4dd registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-04-10 01:14:57.955618 | orchestrator | e0a259d4d2a4 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-10 01:14:57.955623 | orchestrator | c748841e3f85 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-10 01:14:57.955633 | orchestrator | d11928754561 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-10 01:14:57.955640 | orchestrator | 15e687183f16 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-10 01:14:57.955646 | orchestrator | d1c828aaf448 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-10 01:14:57.955652 | orchestrator | 93a8a33d1fab registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-10 01:14:57.955658 | orchestrator | f0d190dccfcc registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-10 01:14:57.955664 | orchestrator | 1197976d058a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 
14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-10 01:14:57.955670 | orchestrator | 9205f81c6a56 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2026-04-10 01:14:57.955676 | orchestrator | b7b7eaba735d registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-10 01:14:57.955683 | orchestrator | 8ba64b919ea4 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-10 01:14:57.955689 | orchestrator | a8bd4fb30a07 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-04-10 01:14:57.955695 | orchestrator | 767109bb739b registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-10 01:14:57.955700 | orchestrator | 65b0cf88ab1d registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-10 01:14:57.955711 | orchestrator | d10bdf845e0f registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2026-04-10 01:14:57.955717 | orchestrator | 28316bef9f03 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-10 01:14:57.955724 | orchestrator | 5067b892f8a2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1 2026-04-10 01:14:57.955730 | orchestrator | 5f49bd73c5f2 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-04-10 01:14:57.955737 | orchestrator | 70ad88c543e8 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-10 
01:14:57.955750 | orchestrator | 116fc2def2d6 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-10 01:14:57.955757 | orchestrator | a2ff4c38bbed registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-10 01:14:57.955764 | orchestrator | 4cee7bc56db7 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-10 01:14:57.955770 | orchestrator | 9c7de34edba1 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2026-04-10 01:14:57.955777 | orchestrator | fbe9cee944e3 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1 2026-04-10 01:14:57.955783 | orchestrator | 0000a8edab0e registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-10 01:14:57.955790 | orchestrator | e7f44241eb54 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2026-04-10 01:14:57.955794 | orchestrator | c72face6f71f registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-10 01:14:57.955801 | orchestrator | ccfc3981dcc7 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-10 01:14:57.955805 | orchestrator | 83139bd20791 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-10 01:14:57.955809 | orchestrator | 30424281f92a registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-10 01:14:57.955813 | orchestrator | c2c81afa8cf1 
registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-10 01:14:57.955817 | orchestrator | c1293d13ee53 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-10 01:14:57.955822 | orchestrator | f9548134c985 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-10 01:14:57.955834 | orchestrator | 4f0ab836e8e0 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-10 01:14:58.112230 | orchestrator | 2026-04-10 01:14:58.112295 | orchestrator | ## Images @ testbed-node-1 2026-04-10 01:14:58.112304 | orchestrator | 2026-04-10 01:14:58.112310 | orchestrator | + echo 2026-04-10 01:14:58.112316 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-10 01:14:58.112323 | orchestrator | + echo 2026-04-10 01:14:58.112329 | orchestrator | + osism container testbed-node-1 images 2026-04-10 01:14:59.528021 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-10 01:14:59.528121 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 45025d19a8a7 16 hours ago 848MB 2026-04-10 01:14:59.528131 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 cd763de3e75b 16 hours ago 848MB 2026-04-10 01:14:59.528138 | orchestrator | registry.osism.tech/osism/ceph-daemon reef da85555ca3b8 16 hours ago 1.35GB 2026-04-10 01:14:59.528146 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4c5bda7121dd 45 hours ago 266MB 2026-04-10 01:14:59.528153 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1b423135131d 45 hours ago 273MB 2026-04-10 01:14:59.528159 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 20e411de4aa7 45 hours ago 273MB 2026-04-10 01:14:59.528165 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ed0a26f28f7c 45 hours ago 452MB 2026-04-10 01:14:59.528171 | orchestrator | 
registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 799699931a41 45 hours ago 298MB 2026-04-10 01:14:59.528178 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 45 hours ago 357MB 2026-04-10 01:14:59.528184 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8b7b44f2563a 45 hours ago 292MB 2026-04-10 01:14:59.528190 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 58219dd9eee5 45 hours ago 301MB 2026-04-10 01:14:59.528197 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 45 hours ago 306MB 2026-04-10 01:14:59.528203 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 304562932cfa 45 hours ago 279MB 2026-04-10 01:14:59.528209 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 e3352e08634e 45 hours ago 279MB 2026-04-10 01:14:59.528215 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b3cfae2d4a21 45 hours ago 975MB 2026-04-10 01:14:59.528221 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 aefbc46ee397 45 hours ago 1.4GB 2026-04-10 01:14:59.528227 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 61b46b13fe15 45 hours ago 1.41GB 2026-04-10 01:14:59.528233 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6cafd41453ca 45 hours ago 1.41GB 2026-04-10 01:14:59.528239 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 31ecb4717921 45 hours ago 1.72GB 2026-04-10 01:14:59.528246 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 0e8d7891d417 45 hours ago 990MB 2026-04-10 01:14:59.528252 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d04f4045e6e0 45 hours ago 991MB 2026-04-10 01:14:59.528258 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e3fd7619dfad 45 hours ago 991MB 2026-04-10 01:14:59.528264 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 
e66da8e4e8e4 45 hours ago 1.16GB 2026-04-10 01:14:59.528270 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 173d7508c3d0 45 hours ago 1.04GB 2026-04-10 01:14:59.528276 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e625c11d2aba 45 hours ago 1.04GB 2026-04-10 01:14:59.528304 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5e4193e479dd 45 hours ago 1.07GB 2026-04-10 01:14:59.528312 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 133764135858 45 hours ago 1.13GB 2026-04-10 01:14:59.528317 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 dd2b3fb7f1cd 45 hours ago 1.24GB 2026-04-10 01:14:59.528323 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 5b4036922655 45 hours ago 1.03GB 2026-04-10 01:14:59.528329 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 dbdb26832643 45 hours ago 1.05GB 2026-04-10 01:14:59.528336 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c4d30b7728c1 45 hours ago 1.03GB 2026-04-10 01:14:59.528343 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 47c9d89c9659 45 hours ago 1.05GB 2026-04-10 01:14:59.528349 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 7ecfa7c2d4c0 45 hours ago 1.03GB 2026-04-10 01:14:59.528372 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf471ac8c087 45 hours ago 1.1GB 2026-04-10 01:14:59.528379 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 6c2771325ef1 45 hours ago 989MB 2026-04-10 01:14:59.528385 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 1ca54700db6e 45 hours ago 983MB 2026-04-10 01:14:59.528407 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f12ce9cf8572 45 hours ago 984MB 2026-04-10 01:14:59.528414 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 71d34dfb5386 45 hours ago 984MB 2026-04-10 01:14:59.528420 | orchestrator | 
registry.osism.tech/kolla/designate-worker 2024.2 49c5d2e5a9c9 45 hours ago 989MB 2026-04-10 01:14:59.528426 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 e0b3465740e7 45 hours ago 984MB 2026-04-10 01:14:59.528432 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 21078444d17b 45 hours ago 1.21GB 2026-04-10 01:14:59.528437 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 43d66ea212d8 45 hours ago 1.37GB 2026-04-10 01:14:59.528443 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7fd0568028b5 45 hours ago 1.21GB 2026-04-10 01:14:59.528450 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9c2a462a150e 45 hours ago 1.21GB 2026-04-10 01:14:59.528484 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 000506bf22df 45 hours ago 840MB 2026-04-10 01:14:59.528492 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 dacd1a06688c 45 hours ago 840MB 2026-04-10 01:14:59.528498 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 47 hours ago 1.56GB 2026-04-10 01:14:59.528504 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 15bb65e2b02e 47 hours ago 1.53GB 2026-04-10 01:14:59.528511 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ddd2e742b66d 47 hours ago 276MB 2026-04-10 01:14:59.528517 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 47 hours ago 265MB 2026-04-10 01:14:59.528523 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 289da7c7eeb7 47 hours ago 322MB 2026-04-10 01:14:59.528529 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 987bccb7e29c 47 hours ago 1.03GB 2026-04-10 01:14:59.528535 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c28009080316 47 hours ago 274MB 2026-04-10 01:14:59.528541 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d9085bb7b182 47 hours ago 411MB 2026-04-10 01:14:59.528547 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 
b8a664c9cb1b 47 hours ago 579MB 2026-04-10 01:14:59.528562 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e0a7aa0c103d 47 hours ago 668MB 2026-04-10 01:14:59.528569 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0f4a765fdbd2 47 hours ago 1.15GB 2026-04-10 01:14:59.693355 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-10 01:14:59.694790 | orchestrator | ++ semver latest 5.0.0 2026-04-10 01:14:59.780043 | orchestrator | 2026-04-10 01:14:59.780117 | orchestrator | ## Containers @ testbed-node-2 2026-04-10 01:14:59.780124 | orchestrator | 2026-04-10 01:14:59.780129 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-10 01:14:59.780133 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-10 01:14:59.780138 | orchestrator | + echo 2026-04-10 01:14:59.780142 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-10 01:14:59.780149 | orchestrator | + echo 2026-04-10 01:14:59.780153 | orchestrator | + osism container testbed-node-2 ps 2026-04-10 01:15:01.352913 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-10 01:15:01.353007 | orchestrator | 9e0fb082342a registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-04-10 01:15:01.353020 | orchestrator | 151ca53de9f2 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-10 01:15:01.353028 | orchestrator | 3a39e3b0e6ba registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-10 01:15:01.353034 | orchestrator | 1dfde9dd7c94 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-10 01:15:01.353041 | orchestrator | ae824e57ea30 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 
minutes ago Up 4 minutes (healthy) octavia_api 2026-04-10 01:15:01.353048 | orchestrator | 054e45e695ec registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-10 01:15:01.353055 | orchestrator | 9a13a396032d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2026-04-10 01:15:01.353062 | orchestrator | f6f8b9c2bf65 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2026-04-10 01:15:01.353069 | orchestrator | 3b7ac0a51c18 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-10 01:15:01.353076 | orchestrator | 663f4271275f registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2026-04-10 01:15:01.353083 | orchestrator | e21ce413447f registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-04-10 01:15:01.353091 | orchestrator | 6ec8d47ceb8a registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2026-04-10 01:15:01.353101 | orchestrator | ffca94c19a81 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-10 01:15:01.353108 | orchestrator | c866cf689c18 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2026-04-10 01:15:01.353133 | orchestrator | 60da3f17d77c registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-10 01:15:01.353202 | orchestrator | 8b4ca759fde9 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes 
(healthy) designate_central 2026-04-10 01:15:01.353208 | orchestrator | 4682fd8837f6 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-10 01:15:01.353212 | orchestrator | c6003c13a950 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-10 01:15:01.353216 | orchestrator | ae82260e1910 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2026-04-10 01:15:01.353220 | orchestrator | 0afba111d793 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-04-10 01:15:01.353224 | orchestrator | fa8b63e82bee registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-10 01:15:01.353241 | orchestrator | a5e1def4ebc7 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-10 01:15:01.353245 | orchestrator | 7c54c0f70ede registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2026-04-10 01:15:01.353249 | orchestrator | e6e68129cb27 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-10 01:15:01.353253 | orchestrator | e9ec25c42625 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-04-10 01:15:01.353257 | orchestrator | d75dd2a6fc2b registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-10 01:15:01.353261 | orchestrator | 2513699051a5 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 
minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-10 01:15:01.353264 | orchestrator | a30c90cc34ca registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-10 01:15:01.353268 | orchestrator | 91870c1a85db registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-10 01:15:01.353273 | orchestrator | 5366bedd5800 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-04-10 01:15:01.353277 | orchestrator | 4707610db789 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-04-10 01:15:01.353281 | orchestrator | b6f952a9a520 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-04-10 01:15:01.353285 | orchestrator | a2cca3c39397 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-10 01:15:01.353289 | orchestrator | f688d2940a24 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2026-04-10 01:15:01.353297 | orchestrator | 970b66819d30 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-10 01:15:01.353301 | orchestrator | 7ed8ca381bb4 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-10 01:15:01.353304 | orchestrator | 4bc1c6126b8c registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-04-10 01:15:01.353308 | orchestrator | a025f718daec 
registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-10 01:15:01.353312 | orchestrator | bad4da388b79 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-10 01:15:01.353316 | orchestrator | 88025ba7ff64 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2026-04-10 01:15:01.353320 | orchestrator | 5de90b3048db registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-10 01:15:01.353324 | orchestrator | d2a1693f517f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2 2026-04-10 01:15:01.353328 | orchestrator | e058a9b8a236 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2026-04-10 01:15:01.353332 | orchestrator | ee7fd8adfd75 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2026-04-10 01:15:01.353340 | orchestrator | 68324436ad50 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2026-04-10 01:15:01.353345 | orchestrator | f44c4c5267f8 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-10 01:15:01.353349 | orchestrator | 1cd5a87da189 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-10 01:15:01.353353 | orchestrator | 2210bdd25786 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2026-04-10 01:15:01.353357 | orchestrator | cc239c1ec819 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) 
rabbitmq 2026-04-10 01:15:01.353373 | orchestrator | 277ac8566ad1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-2 2026-04-10 01:15:01.353377 | orchestrator | 92864e5adacd registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-10 01:15:01.353381 | orchestrator | 0375f082c6d9 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-10 01:15:01.353385 | orchestrator | 93dddcffb161 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2026-04-10 01:15:01.353389 | orchestrator | d26e7d39f082 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2026-04-10 01:15:01.353396 | orchestrator | 1033dc7d63ef registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2026-04-10 01:15:01.353400 | orchestrator | 8f5941989639 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2026-04-10 01:15:01.353404 | orchestrator | d516b9e8b805 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2026-04-10 01:15:01.353408 | orchestrator | 91b3f57a6b36 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2026-04-10 01:15:01.353412 | orchestrator | 6036b4aed241 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-10 01:15:01.517806 | orchestrator | 2026-04-10 01:15:01.517890 | orchestrator | ## Images @ testbed-node-2 2026-04-10 01:15:01.517902 | orchestrator | 2026-04-10 01:15:01.517908 | orchestrator | + echo 2026-04-10 01:15:01.517915 | orchestrator | + echo 
'## Images @ testbed-node-2' 2026-04-10 01:15:01.517922 | orchestrator | + echo 2026-04-10 01:15:01.517929 | orchestrator | + osism container testbed-node-2 images 2026-04-10 01:15:02.991569 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-10 01:15:02.991646 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 45025d19a8a7 16 hours ago 848MB 2026-04-10 01:15:02.991652 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 cd763de3e75b 16 hours ago 848MB 2026-04-10 01:15:02.991710 | orchestrator | registry.osism.tech/osism/ceph-daemon reef da85555ca3b8 16 hours ago 1.35GB 2026-04-10 01:15:02.991717 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4c5bda7121dd 45 hours ago 266MB 2026-04-10 01:15:02.991724 | orchestrator | registry.osism.tech/kolla/redis 2024.2 1b423135131d 45 hours ago 273MB 2026-04-10 01:15:02.991732 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 20e411de4aa7 45 hours ago 273MB 2026-04-10 01:15:02.991742 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ed0a26f28f7c 45 hours ago 452MB 2026-04-10 01:15:02.991748 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 799699931a41 45 hours ago 298MB 2026-04-10 01:15:02.991754 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b70ba58fb0aa 45 hours ago 357MB 2026-04-10 01:15:02.991761 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 8b7b44f2563a 45 hours ago 292MB 2026-04-10 01:15:02.991767 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 58219dd9eee5 45 hours ago 301MB 2026-04-10 01:15:02.991773 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 e750d96ecfc5 45 hours ago 306MB 2026-04-10 01:15:02.991780 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 304562932cfa 45 hours ago 279MB 2026-04-10 01:15:02.991786 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 
2024.2 e3352e08634e 45 hours ago 279MB 2026-04-10 01:15:02.991792 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 b3cfae2d4a21 45 hours ago 975MB 2026-04-10 01:15:02.991799 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 aefbc46ee397 45 hours ago 1.4GB 2026-04-10 01:15:02.991805 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 61b46b13fe15 45 hours ago 1.41GB 2026-04-10 01:15:02.991811 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 6cafd41453ca 45 hours ago 1.41GB 2026-04-10 01:15:02.991817 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 31ecb4717921 45 hours ago 1.72GB 2026-04-10 01:15:02.991840 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 0e8d7891d417 45 hours ago 990MB 2026-04-10 01:15:02.991846 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 d04f4045e6e0 45 hours ago 991MB 2026-04-10 01:15:02.991853 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 e3fd7619dfad 45 hours ago 991MB 2026-04-10 01:15:02.991859 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 e66da8e4e8e4 45 hours ago 1.16GB 2026-04-10 01:15:02.991866 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 173d7508c3d0 45 hours ago 1.04GB 2026-04-10 01:15:02.991872 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e625c11d2aba 45 hours ago 1.04GB 2026-04-10 01:15:02.991878 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 5e4193e479dd 45 hours ago 1.07GB 2026-04-10 01:15:02.991885 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 133764135858 45 hours ago 1.13GB 2026-04-10 01:15:02.991891 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 dd2b3fb7f1cd 45 hours ago 1.24GB 2026-04-10 01:15:02.991904 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 5b4036922655 45 hours ago 1.03GB 2026-04-10 01:15:02.991910 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 
2024.2 dbdb26832643 45 hours ago 1.05GB 2026-04-10 01:15:02.991916 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c4d30b7728c1 45 hours ago 1.03GB 2026-04-10 01:15:02.991921 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 47c9d89c9659 45 hours ago 1.05GB 2026-04-10 01:15:02.991928 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 7ecfa7c2d4c0 45 hours ago 1.03GB 2026-04-10 01:15:02.991934 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf471ac8c087 45 hours ago 1.1GB 2026-04-10 01:15:02.991940 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 6c2771325ef1 45 hours ago 989MB 2026-04-10 01:15:02.991946 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 1ca54700db6e 45 hours ago 983MB 2026-04-10 01:15:02.991968 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 f12ce9cf8572 45 hours ago 984MB 2026-04-10 01:15:02.991978 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 71d34dfb5386 45 hours ago 984MB 2026-04-10 01:15:02.991984 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 49c5d2e5a9c9 45 hours ago 989MB 2026-04-10 01:15:02.991990 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 e0b3465740e7 45 hours ago 984MB 2026-04-10 01:15:02.991997 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 21078444d17b 45 hours ago 1.21GB 2026-04-10 01:15:02.992003 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 43d66ea212d8 45 hours ago 1.37GB 2026-04-10 01:15:02.992010 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7fd0568028b5 45 hours ago 1.21GB 2026-04-10 01:15:02.992016 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9c2a462a150e 45 hours ago 1.21GB 2026-04-10 01:15:02.992023 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 000506bf22df 45 hours ago 840MB 2026-04-10 01:15:02.992029 | orchestrator | 
registry.osism.tech/kolla/ovn-nb-db-server 2024.2 dacd1a06688c 45 hours ago 840MB 2026-04-10 01:15:02.992035 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 edddd92cc686 47 hours ago 1.56GB 2026-04-10 01:15:02.992041 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 15bb65e2b02e 47 hours ago 1.53GB 2026-04-10 01:15:02.992069 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 ddd2e742b66d 47 hours ago 276MB 2026-04-10 01:15:02.992114 | orchestrator | registry.osism.tech/kolla/cron 2024.2 3264740a29b5 47 hours ago 265MB 2026-04-10 01:15:02.992122 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 289da7c7eeb7 47 hours ago 322MB 2026-04-10 01:15:02.992128 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 987bccb7e29c 47 hours ago 1.03GB 2026-04-10 01:15:02.992134 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 c28009080316 47 hours ago 274MB 2026-04-10 01:15:02.992140 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d9085bb7b182 47 hours ago 411MB 2026-04-10 01:15:02.992146 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b8a664c9cb1b 47 hours ago 579MB 2026-04-10 01:15:02.992151 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 e0a7aa0c103d 47 hours ago 668MB 2026-04-10 01:15:02.992157 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0f4a765fdbd2 47 hours ago 1.15GB 2026-04-10 01:15:03.140798 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-10 01:15:03.148351 | orchestrator | + set -e 2026-04-10 01:15:03.148422 | orchestrator | + source /opt/manager-vars.sh 2026-04-10 01:15:03.148976 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-10 01:15:03.149012 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-10 01:15:03.149019 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-10 01:15:03.149024 | orchestrator | ++ CEPH_VERSION=reef 2026-04-10 01:15:03.149029 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-10 
01:15:03.149119 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-10 01:15:03.149125 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-10 01:15:03.149130 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-10 01:15:03.149135 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-10 01:15:03.149140 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-10 01:15:03.149144 | orchestrator | ++ export ARA=false 2026-04-10 01:15:03.149149 | orchestrator | ++ ARA=false 2026-04-10 01:15:03.149154 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-10 01:15:03.149159 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-10 01:15:03.149163 | orchestrator | ++ export TEMPEST=true 2026-04-10 01:15:03.149168 | orchestrator | ++ TEMPEST=true 2026-04-10 01:15:03.149172 | orchestrator | ++ export IS_ZUUL=true 2026-04-10 01:15:03.149177 | orchestrator | ++ IS_ZUUL=true 2026-04-10 01:15:03.149181 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 01:15:03.149186 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 01:15:03.149190 | orchestrator | ++ export EXTERNAL_API=false 2026-04-10 01:15:03.149195 | orchestrator | ++ EXTERNAL_API=false 2026-04-10 01:15:03.149199 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-10 01:15:03.149203 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-10 01:15:03.149208 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-10 01:15:03.149212 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-10 01:15:03.149217 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-10 01:15:03.149221 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-10 01:15:03.149226 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-10 01:15:03.149230 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-10 01:15:03.157511 | orchestrator | + set -e 2026-04-10 01:15:03.157599 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-10 
01:15:03.157610 | orchestrator | ++ export INTERACTIVE=false
2026-04-10 01:15:03.157619 | orchestrator | ++ INTERACTIVE=false
2026-04-10 01:15:03.157627 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-10 01:15:03.157633 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-10 01:15:03.157639 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-10 01:15:03.158809 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-10 01:15:03.163929 | orchestrator |
2026-04-10 01:15:03.164004 | orchestrator | # Ceph status
2026-04-10 01:15:03.164013 | orchestrator |
2026-04-10 01:15:03.164020 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-10 01:15:03.164043 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-10 01:15:03.164050 | orchestrator | + echo
2026-04-10 01:15:03.164064 | orchestrator | + echo '# Ceph status'
2026-04-10 01:15:03.164070 | orchestrator | + echo
2026-04-10 01:15:03.164076 | orchestrator | + ceph -s
2026-04-10 01:15:03.740881 | orchestrator |   cluster:
2026-04-10 01:15:03.740984 | orchestrator |     id:     11111111-1111-1111-1111-111111111111
2026-04-10 01:15:03.740997 | orchestrator |     health: HEALTH_OK
2026-04-10 01:15:03.741007 | orchestrator |
2026-04-10 01:15:03.741013 | orchestrator |   services:
2026-04-10 01:15:03.741019 | orchestrator |     mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m)
2026-04-10 01:15:03.741028 | orchestrator |     mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0
2026-04-10 01:15:03.741036 | orchestrator |     mds: 1/1 daemons up, 2 standby
2026-04-10 01:15:03.741043 | orchestrator |     osd: 6 osds: 6 up (since 22m), 6 in (since 23m)
2026-04-10 01:15:03.741049 | orchestrator |     rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-10 01:15:03.741055 | orchestrator |
2026-04-10 01:15:03.741062 | orchestrator |   data:
2026-04-10 01:15:03.741068 | orchestrator |     volumes: 1/1 healthy
2026-04-10 01:15:03.741073 | orchestrator |     pools:   14 pools, 401 pgs
2026-04-10 01:15:03.741079 | orchestrator |     objects: 556 objects, 2.2 GiB
2026-04-10 01:15:03.741085 | orchestrator |     usage:   7.1 GiB used, 113 GiB / 120 GiB avail
2026-04-10 01:15:03.741091 | orchestrator |     pgs:     401 active+clean
2026-04-10 01:15:03.741097 | orchestrator |
2026-04-10 01:15:03.741103 | orchestrator |   io:
2026-04-10 01:15:03.741109 | orchestrator |     client: 99 KiB/s rd, 0 B/s wr, 98 op/s rd, 65 op/s wr
2026-04-10 01:15:03.741115 | orchestrator |
2026-04-10 01:15:03.790316 | orchestrator |
2026-04-10 01:15:03.790419 | orchestrator | # Ceph versions
2026-04-10 01:15:03.790428 | orchestrator |
2026-04-10 01:15:03.790434 | orchestrator | + echo
2026-04-10 01:15:03.790441 | orchestrator | + echo '# Ceph versions'
2026-04-10 01:15:03.790449 | orchestrator | + echo
2026-04-10 01:15:03.790473 | orchestrator | + ceph versions
2026-04-10 01:15:04.354978 | orchestrator | {
2026-04-10 01:15:04.355072 | orchestrator |     "mon": {
2026-04-10 01:15:04.355083 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-10 01:15:04.355091 | orchestrator |     },
2026-04-10 01:15:04.355097 | orchestrator |     "mgr": {
2026-04-10 01:15:04.355133 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-10 01:15:04.355139 | orchestrator |     },
2026-04-10 01:15:04.355143 | orchestrator |     "osd": {
2026-04-10 01:15:04.355147 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6
2026-04-10 01:15:04.355151 | orchestrator |     },
2026-04-10 01:15:04.355155 | orchestrator |     "mds": {
2026-04-10 01:15:04.355160 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-10 01:15:04.355164 | orchestrator |     },
2026-04-10 01:15:04.355168 | orchestrator |     "rgw": {
2026-04-10 01:15:04.355172 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3
2026-04-10 01:15:04.355176 | orchestrator |     },
2026-04-10 01:15:04.355180 | orchestrator |     "overall": {
2026-04-10 01:15:04.355185 | orchestrator |         "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18
2026-04-10 01:15:04.355189 | orchestrator |     }
2026-04-10 01:15:04.355193 | orchestrator | }
2026-04-10 01:15:04.402896 | orchestrator |
2026-04-10 01:15:04.402967 | orchestrator | # Ceph OSD tree
2026-04-10 01:15:04.402974 | orchestrator |
2026-04-10 01:15:04.402978 | orchestrator | + echo
2026-04-10 01:15:04.402983 | orchestrator | + echo '# Ceph OSD tree'
2026-04-10 01:15:04.402988 | orchestrator | + echo
2026-04-10 01:15:04.402992 | orchestrator | + ceph osd df tree
2026-04-10 01:15:04.941052 | orchestrator | ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP   META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
2026-04-10 01:15:04.941192 | orchestrator | -1         0.11691         -  120 GiB  7.1 GiB  6.7 GiB  6 KiB  430 MiB  113 GiB  5.92  1.00    -          root default
2026-04-10 01:15:04.941203 | orchestrator | -3         0.03897         -   40 GiB  2.4 GiB  2.2 GiB  2 KiB  143 MiB   38 GiB  5.91  1.00    -          host testbed-node-3
2026-04-10 01:15:04.941208 | orchestrator |  0    hdd  0.01949   1.00000   20 GiB  1.5 GiB  1.5 GiB  1 KiB   70 MiB   18 GiB  7.66  1.30  200      up          osd.0
2026-04-10 01:15:04.941212 | orchestrator |  4    hdd  0.01949   1.00000   20 GiB  852 MiB  778 MiB  1 KiB   74 MiB   19 GiB  4.16  0.70  190      up          osd.4
2026-04-10 01:15:04.941216 | orchestrator | -5         0.03897         -   40 GiB  2.4 GiB  2.2 GiB  2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-4
2026-04-10 01:15:04.941220 | orchestrator |  1    hdd  0.01949   1.00000   20 GiB  901 MiB  827 MiB  1 KiB   74 MiB   19 GiB  4.40  0.74  176      up          osd.1
2026-04-10 01:15:04.941244 | orchestrator |  3    hdd  0.01949   1.00000   20 GiB  1.5 GiB  1.4 GiB  1 KiB   70 MiB   18 GiB  7.43  1.26  216      up          osd.3
2026-04-10 01:15:04.941248 | orchestrator | -7         0.03897         -   40 GiB  2.4 GiB  2.2 GiB  2 KiB  143 MiB   38 GiB  5.92  1.00    -          host testbed-node-5
2026-04-10 01:15:04.941252 | orchestrator |  2
hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.30 1.06 191 up osd.2 2026-04-10 01:15:04.941256 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.54 0.94 197 up osd.5 2026-04-10 01:15:04.941259 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-04-10 01:15:04.941263 | orchestrator | MIN/MAX VAR: 0.70/1.30 STDDEV: 1.35 2026-04-10 01:15:04.989697 | orchestrator | 2026-04-10 01:15:04.989784 | orchestrator | # Ceph monitor status 2026-04-10 01:15:04.989795 | orchestrator | 2026-04-10 01:15:04.989802 | orchestrator | + echo 2026-04-10 01:15:04.989808 | orchestrator | + echo '# Ceph monitor status' 2026-04-10 01:15:04.989815 | orchestrator | + echo 2026-04-10 01:15:04.989823 | orchestrator | + ceph mon stat 2026-04-10 01:15:05.614206 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-10 01:15:05.671340 | orchestrator | 2026-04-10 01:15:05.671399 | orchestrator | # Ceph quorum status 2026-04-10 01:15:05.671409 | orchestrator | 2026-04-10 01:15:05.671416 | orchestrator | + echo 2026-04-10 01:15:05.671423 | orchestrator | + echo '# Ceph quorum status' 2026-04-10 01:15:05.671429 | orchestrator | + echo 2026-04-10 01:15:05.672180 | orchestrator | + ceph quorum_status 2026-04-10 01:15:05.672284 | orchestrator | + jq 2026-04-10 01:15:06.319333 | orchestrator | { 2026-04-10 01:15:06.319438 | orchestrator | "election_epoch": 4, 2026-04-10 01:15:06.319482 | orchestrator | "quorum": [ 2026-04-10 01:15:06.319507 | orchestrator | 0, 2026-04-10 01:15:06.319516 | orchestrator | 1, 2026-04-10 01:15:06.319525 | orchestrator | 2 2026-04-10 01:15:06.319534 | orchestrator | ], 2026-04-10 
01:15:06.319543 | orchestrator | "quorum_names": [ 2026-04-10 01:15:06.319552 | orchestrator | "testbed-node-0", 2026-04-10 01:15:06.319561 | orchestrator | "testbed-node-1", 2026-04-10 01:15:06.319570 | orchestrator | "testbed-node-2" 2026-04-10 01:15:06.319579 | orchestrator | ], 2026-04-10 01:15:06.319588 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-10 01:15:06.319597 | orchestrator | "quorum_age": 1545, 2026-04-10 01:15:06.319606 | orchestrator | "features": { 2026-04-10 01:15:06.319615 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-10 01:15:06.319624 | orchestrator | "quorum_mon": [ 2026-04-10 01:15:06.319633 | orchestrator | "kraken", 2026-04-10 01:15:06.319642 | orchestrator | "luminous", 2026-04-10 01:15:06.319651 | orchestrator | "mimic", 2026-04-10 01:15:06.319659 | orchestrator | "osdmap-prune", 2026-04-10 01:15:06.319668 | orchestrator | "nautilus", 2026-04-10 01:15:06.319677 | orchestrator | "octopus", 2026-04-10 01:15:06.319686 | orchestrator | "pacific", 2026-04-10 01:15:06.319694 | orchestrator | "elector-pinging", 2026-04-10 01:15:06.319703 | orchestrator | "quincy", 2026-04-10 01:15:06.319712 | orchestrator | "reef" 2026-04-10 01:15:06.319721 | orchestrator | ] 2026-04-10 01:15:06.319730 | orchestrator | }, 2026-04-10 01:15:06.319738 | orchestrator | "monmap": { 2026-04-10 01:15:06.319747 | orchestrator | "epoch": 1, 2026-04-10 01:15:06.319756 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-10 01:15:06.319766 | orchestrator | "modified": "2026-04-10T00:49:06.471685Z", 2026-04-10 01:15:06.319775 | orchestrator | "created": "2026-04-10T00:49:06.471685Z", 2026-04-10 01:15:06.319783 | orchestrator | "min_mon_release": 18, 2026-04-10 01:15:06.319792 | orchestrator | "min_mon_release_name": "reef", 2026-04-10 01:15:06.319810 | orchestrator | "election_strategy": 1, 2026-04-10 01:15:06.319819 | orchestrator | "disallowed_leaders": "", 2026-04-10 01:15:06.319827 | orchestrator | "stretch_mode": 
false, 2026-04-10 01:15:06.319836 | orchestrator | "tiebreaker_mon": "", 2026-04-10 01:15:06.319845 | orchestrator | "removed_ranks": "", 2026-04-10 01:15:06.319855 | orchestrator | "features": { 2026-04-10 01:15:06.319865 | orchestrator | "persistent": [ 2026-04-10 01:15:06.319875 | orchestrator | "kraken", 2026-04-10 01:15:06.319904 | orchestrator | "luminous", 2026-04-10 01:15:06.319915 | orchestrator | "mimic", 2026-04-10 01:15:06.319924 | orchestrator | "osdmap-prune", 2026-04-10 01:15:06.319934 | orchestrator | "nautilus", 2026-04-10 01:15:06.319944 | orchestrator | "octopus", 2026-04-10 01:15:06.319954 | orchestrator | "pacific", 2026-04-10 01:15:06.319964 | orchestrator | "elector-pinging", 2026-04-10 01:15:06.319974 | orchestrator | "quincy", 2026-04-10 01:15:06.319985 | orchestrator | "reef" 2026-04-10 01:15:06.319994 | orchestrator | ], 2026-04-10 01:15:06.320004 | orchestrator | "optional": [] 2026-04-10 01:15:06.320014 | orchestrator | }, 2026-04-10 01:15:06.320024 | orchestrator | "mons": [ 2026-04-10 01:15:06.320034 | orchestrator | { 2026-04-10 01:15:06.320044 | orchestrator | "rank": 0, 2026-04-10 01:15:06.320055 | orchestrator | "name": "testbed-node-0", 2026-04-10 01:15:06.320064 | orchestrator | "public_addrs": { 2026-04-10 01:15:06.320075 | orchestrator | "addrvec": [ 2026-04-10 01:15:06.320085 | orchestrator | { 2026-04-10 01:15:06.320095 | orchestrator | "type": "v2", 2026-04-10 01:15:06.320105 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-10 01:15:06.320115 | orchestrator | "nonce": 0 2026-04-10 01:15:06.320125 | orchestrator | }, 2026-04-10 01:15:06.320134 | orchestrator | { 2026-04-10 01:15:06.320144 | orchestrator | "type": "v1", 2026-04-10 01:15:06.320155 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-10 01:15:06.320165 | orchestrator | "nonce": 0 2026-04-10 01:15:06.320176 | orchestrator | } 2026-04-10 01:15:06.320186 | orchestrator | ] 2026-04-10 01:15:06.320196 | orchestrator | }, 2026-04-10 01:15:06.320205 | 
orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-10 01:15:06.320213 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-10 01:15:06.320222 | orchestrator | "priority": 0, 2026-04-10 01:15:06.320230 | orchestrator | "weight": 0, 2026-04-10 01:15:06.320239 | orchestrator | "crush_location": "{}" 2026-04-10 01:15:06.320248 | orchestrator | }, 2026-04-10 01:15:06.320256 | orchestrator | { 2026-04-10 01:15:06.320265 | orchestrator | "rank": 1, 2026-04-10 01:15:06.320274 | orchestrator | "name": "testbed-node-1", 2026-04-10 01:15:06.320282 | orchestrator | "public_addrs": { 2026-04-10 01:15:06.320291 | orchestrator | "addrvec": [ 2026-04-10 01:15:06.320300 | orchestrator | { 2026-04-10 01:15:06.320308 | orchestrator | "type": "v2", 2026-04-10 01:15:06.320317 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-10 01:15:06.320326 | orchestrator | "nonce": 0 2026-04-10 01:15:06.320334 | orchestrator | }, 2026-04-10 01:15:06.320343 | orchestrator | { 2026-04-10 01:15:06.320352 | orchestrator | "type": "v1", 2026-04-10 01:15:06.320360 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-10 01:15:06.320418 | orchestrator | "nonce": 0 2026-04-10 01:15:06.320429 | orchestrator | } 2026-04-10 01:15:06.320440 | orchestrator | ] 2026-04-10 01:15:06.320556 | orchestrator | }, 2026-04-10 01:15:06.320570 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-10 01:15:06.320578 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-10 01:15:06.320587 | orchestrator | "priority": 0, 2026-04-10 01:15:06.320596 | orchestrator | "weight": 0, 2026-04-10 01:15:06.320605 | orchestrator | "crush_location": "{}" 2026-04-10 01:15:06.320613 | orchestrator | }, 2026-04-10 01:15:06.320622 | orchestrator | { 2026-04-10 01:15:06.320630 | orchestrator | "rank": 2, 2026-04-10 01:15:06.320639 | orchestrator | "name": "testbed-node-2", 2026-04-10 01:15:06.320648 | orchestrator | "public_addrs": { 2026-04-10 01:15:06.320657 | orchestrator | "addrvec": [ 2026-04-10 
01:15:06.320665 | orchestrator | { 2026-04-10 01:15:06.320674 | orchestrator | "type": "v2", 2026-04-10 01:15:06.320683 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-10 01:15:06.320691 | orchestrator | "nonce": 0 2026-04-10 01:15:06.320700 | orchestrator | }, 2026-04-10 01:15:06.320709 | orchestrator | { 2026-04-10 01:15:06.320717 | orchestrator | "type": "v1", 2026-04-10 01:15:06.320726 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-10 01:15:06.320735 | orchestrator | "nonce": 0 2026-04-10 01:15:06.320743 | orchestrator | } 2026-04-10 01:15:06.320752 | orchestrator | ] 2026-04-10 01:15:06.320761 | orchestrator | }, 2026-04-10 01:15:06.320769 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-10 01:15:06.320778 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-10 01:15:06.320796 | orchestrator | "priority": 0, 2026-04-10 01:15:06.320805 | orchestrator | "weight": 0, 2026-04-10 01:15:06.320814 | orchestrator | "crush_location": "{}" 2026-04-10 01:15:06.320823 | orchestrator | } 2026-04-10 01:15:06.320831 | orchestrator | ] 2026-04-10 01:15:06.320840 | orchestrator | } 2026-04-10 01:15:06.320849 | orchestrator | } 2026-04-10 01:15:06.320976 | orchestrator | 2026-04-10 01:15:06.320989 | orchestrator | # Ceph free space status 2026-04-10 01:15:06.320998 | orchestrator | 2026-04-10 01:15:06.321007 | orchestrator | + echo 2026-04-10 01:15:06.321016 | orchestrator | + echo '# Ceph free space status' 2026-04-10 01:15:06.321025 | orchestrator | + echo 2026-04-10 01:15:06.321034 | orchestrator | + ceph df 2026-04-10 01:15:06.900654 | orchestrator | --- RAW STORAGE --- 2026-04-10 01:15:06.900724 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-10 01:15:06.900738 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-10 01:15:06.900744 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2026-04-10 01:15:06.900749 | orchestrator | 2026-04-10 01:15:06.900754 | orchestrator | --- POOLS --- 2026-04-10 
01:15:06.900759 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-10 01:15:06.900764 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-04-10 01:15:06.900769 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-10 01:15:06.900774 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-10 01:15:06.900778 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-10 01:15:06.900783 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-10 01:15:06.900787 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-10 01:15:06.900792 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-10 01:15:06.900797 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-10 01:15:06.900801 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-04-10 01:15:06.900806 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-10 01:15:06.900811 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-10 01:15:06.900815 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2026-04-10 01:15:06.900820 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-10 01:15:06.900824 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-10 01:15:06.952101 | orchestrator | ++ semver latest 5.0.0 2026-04-10 01:15:07.005215 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-10 01:15:07.005280 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-10 01:15:07.005295 | orchestrator | + osism apply facts 2026-04-10 01:15:18.353188 | orchestrator | 2026-04-10 01:15:18 | INFO  | Prepare task for execution of facts. 2026-04-10 01:15:18.428500 | orchestrator | 2026-04-10 01:15:18 | INFO  | Task 8d4491fb-bc8b-4cf8-937a-c06d842cb6df (facts) was prepared for execution. 
2026-04-10 01:15:18.428682 | orchestrator | 2026-04-10 01:15:18 | INFO  | It takes a moment until task 8d4491fb-bc8b-4cf8-937a-c06d842cb6df (facts) has been started and output is visible here. 2026-04-10 01:15:31.068808 | orchestrator | 2026-04-10 01:15:31.068864 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-10 01:15:31.068870 | orchestrator | 2026-04-10 01:15:31.068874 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-10 01:15:31.068878 | orchestrator | Friday 10 April 2026 01:15:21 +0000 (0:00:00.350) 0:00:00.350 ********** 2026-04-10 01:15:31.068882 | orchestrator | ok: [testbed-manager] 2026-04-10 01:15:31.068887 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:15:31.068891 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:15:31.068895 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:15:31.068899 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:15:31.068903 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:15:31.068907 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:15:31.068922 | orchestrator | 2026-04-10 01:15:31.068926 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-10 01:15:31.068936 | orchestrator | Friday 10 April 2026 01:15:23 +0000 (0:00:01.395) 0:00:01.746 ********** 2026-04-10 01:15:31.068940 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:15:31.068945 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:15:31.068949 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:15:31.068953 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:15:31.068956 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:15:31.068960 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:15:31.068964 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:15:31.068968 | orchestrator | 2026-04-10 01:15:31.068972 | orchestrator | PLAY [Gather facts for 
all hosts] ********************************************** 2026-04-10 01:15:31.068976 | orchestrator | 2026-04-10 01:15:31.068979 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-10 01:15:31.068983 | orchestrator | Friday 10 April 2026 01:15:24 +0000 (0:00:01.275) 0:00:03.022 ********** 2026-04-10 01:15:31.068987 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:15:31.068991 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:15:31.068995 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:15:31.068998 | orchestrator | ok: [testbed-manager] 2026-04-10 01:15:31.069002 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:15:31.069006 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:15:31.069010 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:15:31.069014 | orchestrator | 2026-04-10 01:15:31.069017 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-10 01:15:31.069021 | orchestrator | 2026-04-10 01:15:31.069025 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-10 01:15:31.069029 | orchestrator | Friday 10 April 2026 01:15:29 +0000 (0:00:05.437) 0:00:08.460 ********** 2026-04-10 01:15:31.069033 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:15:31.069037 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:15:31.069044 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:15:31.069054 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:15:31.069062 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:15:31.069068 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:15:31.069074 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:15:31.069081 | orchestrator | 2026-04-10 01:15:31.069088 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:15:31.069094 | orchestrator | testbed-manager : ok=2  changed=0 
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:15:31.069102 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:15:31.069106 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:15:31.069110 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:15:31.069114 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:15:31.069117 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:15:31.069121 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:15:31.069125 | orchestrator | 2026-04-10 01:15:31.069132 | orchestrator | 2026-04-10 01:15:31.069138 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:15:31.069144 | orchestrator | Friday 10 April 2026 01:15:30 +0000 (0:00:00.792) 0:00:09.252 ********** 2026-04-10 01:15:31.069155 | orchestrator | =============================================================================== 2026-04-10 01:15:31.069162 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.44s 2026-04-10 01:15:31.069168 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.40s 2026-04-10 01:15:31.069174 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-04-10 01:15:31.069181 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.79s 2026-04-10 01:15:31.252641 | orchestrator | + osism validate ceph-mons 2026-04-10 01:16:02.097286 | orchestrator | 2026-04-10 01:16:02.097361 | orchestrator | PLAY [Ceph validate mons] 
****************************************************** 2026-04-10 01:16:02.097368 | orchestrator | 2026-04-10 01:16:02.097373 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-10 01:16:02.097377 | orchestrator | Friday 10 April 2026 01:15:46 +0000 (0:00:00.522) 0:00:00.522 ********** 2026-04-10 01:16:02.097382 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:02.097386 | orchestrator | 2026-04-10 01:16:02.097390 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-10 01:16:02.097395 | orchestrator | Friday 10 April 2026 01:15:47 +0000 (0:00:00.999) 0:00:01.522 ********** 2026-04-10 01:16:02.097399 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:02.097403 | orchestrator | 2026-04-10 01:16:02.097407 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-10 01:16:02.097411 | orchestrator | Friday 10 April 2026 01:15:47 +0000 (0:00:00.718) 0:00:02.240 ********** 2026-04-10 01:16:02.097415 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097420 | orchestrator | 2026-04-10 01:16:02.097424 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-10 01:16:02.097453 | orchestrator | Friday 10 April 2026 01:15:48 +0000 (0:00:00.125) 0:00:02.366 ********** 2026-04-10 01:16:02.097460 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097466 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:02.097473 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:02.097477 | orchestrator | 2026-04-10 01:16:02.097481 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-10 01:16:02.097485 | orchestrator | Friday 10 April 2026 01:15:48 +0000 (0:00:00.280) 0:00:02.647 ********** 2026-04-10 01:16:02.097489 | orchestrator | ok: 
[testbed-node-1] 2026-04-10 01:16:02.097493 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:02.097497 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097501 | orchestrator | 2026-04-10 01:16:02.097505 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-10 01:16:02.097509 | orchestrator | Friday 10 April 2026 01:15:49 +0000 (0:00:01.525) 0:00:04.172 ********** 2026-04-10 01:16:02.097513 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097518 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:16:02.097522 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:16:02.097526 | orchestrator | 2026-04-10 01:16:02.097530 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-10 01:16:02.097534 | orchestrator | Friday 10 April 2026 01:15:50 +0000 (0:00:00.302) 0:00:04.475 ********** 2026-04-10 01:16:02.097538 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097542 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:02.097545 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:02.097549 | orchestrator | 2026-04-10 01:16:02.097553 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-10 01:16:02.097557 | orchestrator | Friday 10 April 2026 01:15:50 +0000 (0:00:00.296) 0:00:04.771 ********** 2026-04-10 01:16:02.097561 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097565 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:02.097569 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:02.097573 | orchestrator | 2026-04-10 01:16:02.097577 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-10 01:16:02.097583 | orchestrator | Friday 10 April 2026 01:15:50 +0000 (0:00:00.295) 0:00:05.067 ********** 2026-04-10 01:16:02.097608 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097616 | 
orchestrator | skipping: [testbed-node-1] 2026-04-10 01:16:02.097623 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:16:02.097629 | orchestrator | 2026-04-10 01:16:02.097635 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-10 01:16:02.097641 | orchestrator | Friday 10 April 2026 01:15:51 +0000 (0:00:00.457) 0:00:05.524 ********** 2026-04-10 01:16:02.097646 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097652 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:02.097666 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:02.097672 | orchestrator | 2026-04-10 01:16:02.097677 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-10 01:16:02.097683 | orchestrator | Friday 10 April 2026 01:15:51 +0000 (0:00:00.310) 0:00:05.835 ********** 2026-04-10 01:16:02.097689 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097709 | orchestrator | 2026-04-10 01:16:02.097715 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-10 01:16:02.097729 | orchestrator | Friday 10 April 2026 01:15:51 +0000 (0:00:00.244) 0:00:06.079 ********** 2026-04-10 01:16:02.097735 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097741 | orchestrator | 2026-04-10 01:16:02.097747 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-10 01:16:02.097753 | orchestrator | Friday 10 April 2026 01:15:51 +0000 (0:00:00.261) 0:00:06.341 ********** 2026-04-10 01:16:02.097759 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097765 | orchestrator | 2026-04-10 01:16:02.097772 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:02.097777 | orchestrator | Friday 10 April 2026 01:15:52 +0000 (0:00:00.249) 0:00:06.590 ********** 2026-04-10 01:16:02.097783 | orchestrator | 
2026-04-10 01:16:02.097789 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:02.097795 | orchestrator | Friday 10 April 2026 01:15:52 +0000 (0:00:00.068) 0:00:06.658 ********** 2026-04-10 01:16:02.097801 | orchestrator | 2026-04-10 01:16:02.097807 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:02.097813 | orchestrator | Friday 10 April 2026 01:15:52 +0000 (0:00:00.079) 0:00:06.738 ********** 2026-04-10 01:16:02.097819 | orchestrator | 2026-04-10 01:16:02.097826 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-10 01:16:02.097831 | orchestrator | Friday 10 April 2026 01:15:52 +0000 (0:00:00.230) 0:00:06.968 ********** 2026-04-10 01:16:02.097838 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097845 | orchestrator | 2026-04-10 01:16:02.097853 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-10 01:16:02.097860 | orchestrator | Friday 10 April 2026 01:15:52 +0000 (0:00:00.266) 0:00:07.235 ********** 2026-04-10 01:16:02.097866 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097872 | orchestrator | 2026-04-10 01:16:02.097889 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-10 01:16:02.097894 | orchestrator | Friday 10 April 2026 01:15:53 +0000 (0:00:00.261) 0:00:07.497 ********** 2026-04-10 01:16:02.097899 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097904 | orchestrator | 2026-04-10 01:16:02.097908 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-10 01:16:02.097913 | orchestrator | Friday 10 April 2026 01:15:53 +0000 (0:00:00.121) 0:00:07.618 ********** 2026-04-10 01:16:02.097918 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:16:02.097922 | orchestrator | 
2026-04-10 01:16:02.097927 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-10 01:16:02.097930 | orchestrator | Friday 10 April 2026 01:15:55 +0000 (0:00:01.845) 0:00:09.464 ********** 2026-04-10 01:16:02.097934 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097938 | orchestrator | 2026-04-10 01:16:02.097942 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-10 01:16:02.097952 | orchestrator | Friday 10 April 2026 01:15:55 +0000 (0:00:00.311) 0:00:09.775 ********** 2026-04-10 01:16:02.097956 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.097960 | orchestrator | 2026-04-10 01:16:02.097964 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-10 01:16:02.097968 | orchestrator | Friday 10 April 2026 01:15:55 +0000 (0:00:00.130) 0:00:09.906 ********** 2026-04-10 01:16:02.097971 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097975 | orchestrator | 2026-04-10 01:16:02.097979 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-10 01:16:02.097987 | orchestrator | Friday 10 April 2026 01:15:55 +0000 (0:00:00.307) 0:00:10.213 ********** 2026-04-10 01:16:02.097991 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.097995 | orchestrator | 2026-04-10 01:16:02.097999 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-10 01:16:02.098002 | orchestrator | Friday 10 April 2026 01:15:56 +0000 (0:00:00.287) 0:00:10.501 ********** 2026-04-10 01:16:02.098006 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.098010 | orchestrator | 2026-04-10 01:16:02.098070 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-10 01:16:02.098078 | orchestrator | Friday 10 April 2026 01:15:56 +0000 (0:00:00.107) 0:00:10.608 
********** 2026-04-10 01:16:02.098086 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.098091 | orchestrator | 2026-04-10 01:16:02.098098 | orchestrator | TASK [Prepare status test vars] ************************************************ 2026-04-10 01:16:02.098104 | orchestrator | Friday 10 April 2026 01:15:56 +0000 (0:00:00.119) 0:00:10.728 ********** 2026-04-10 01:16:02.098110 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.098117 | orchestrator | 2026-04-10 01:16:02.098123 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-10 01:16:02.098130 | orchestrator | Friday 10 April 2026 01:15:56 +0000 (0:00:00.276) 0:00:11.005 ********** 2026-04-10 01:16:02.098136 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:16:02.098143 | orchestrator | 2026-04-10 01:16:02.098149 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-10 01:16:02.098153 | orchestrator | Friday 10 April 2026 01:15:57 +0000 (0:00:01.296) 0:00:12.302 ********** 2026-04-10 01:16:02.098156 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.098160 | orchestrator | 2026-04-10 01:16:02.098164 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-10 01:16:02.098168 | orchestrator | Friday 10 April 2026 01:15:58 +0000 (0:00:00.320) 0:00:12.623 ********** 2026-04-10 01:16:02.098172 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.098176 | orchestrator | 2026-04-10 01:16:02.098180 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-10 01:16:02.098183 | orchestrator | Friday 10 April 2026 01:15:58 +0000 (0:00:00.132) 0:00:12.755 ********** 2026-04-10 01:16:02.098187 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:02.098191 | orchestrator | 2026-04-10 01:16:02.098195 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] 
**************** 2026-04-10 01:16:02.098198 | orchestrator | Friday 10 April 2026 01:15:58 +0000 (0:00:00.139) 0:00:12.894 ********** 2026-04-10 01:16:02.098202 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.098206 | orchestrator | 2026-04-10 01:16:02.098210 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-10 01:16:02.098214 | orchestrator | Friday 10 April 2026 01:15:58 +0000 (0:00:00.137) 0:00:13.032 ********** 2026-04-10 01:16:02.098217 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.098221 | orchestrator | 2026-04-10 01:16:02.098225 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-10 01:16:02.098229 | orchestrator | Friday 10 April 2026 01:15:58 +0000 (0:00:00.137) 0:00:13.169 ********** 2026-04-10 01:16:02.098233 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:02.098237 | orchestrator | 2026-04-10 01:16:02.098241 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-10 01:16:02.098253 | orchestrator | Friday 10 April 2026 01:15:59 +0000 (0:00:00.263) 0:00:13.432 ********** 2026-04-10 01:16:02.098257 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:02.098260 | orchestrator | 2026-04-10 01:16:02.098264 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-10 01:16:02.098268 | orchestrator | Friday 10 April 2026 01:15:59 +0000 (0:00:00.251) 0:00:13.684 ********** 2026-04-10 01:16:02.098272 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:02.098275 | orchestrator | 2026-04-10 01:16:02.098279 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-10 01:16:02.098283 | orchestrator | Friday 10 April 2026 01:16:01 +0000 (0:00:01.785) 0:00:15.470 ********** 2026-04-10 01:16:02.098287 
| orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:02.098290 | orchestrator | 2026-04-10 01:16:02.098294 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-10 01:16:02.098298 | orchestrator | Friday 10 April 2026 01:16:01 +0000 (0:00:00.267) 0:00:15.738 ********** 2026-04-10 01:16:02.098302 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:02.098306 | orchestrator | 2026-04-10 01:16:02.098314 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:04.355717 | orchestrator | Friday 10 April 2026 01:16:02 +0000 (0:00:00.719) 0:00:16.457 ********** 2026-04-10 01:16:04.355812 | orchestrator | 2026-04-10 01:16:04.355821 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:04.355826 | orchestrator | Friday 10 April 2026 01:16:02 +0000 (0:00:00.069) 0:00:16.527 ********** 2026-04-10 01:16:04.355830 | orchestrator | 2026-04-10 01:16:04.355835 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:04.355839 | orchestrator | Friday 10 April 2026 01:16:02 +0000 (0:00:00.077) 0:00:16.604 ********** 2026-04-10 01:16:04.355843 | orchestrator | 2026-04-10 01:16:04.355847 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-10 01:16:04.355852 | orchestrator | Friday 10 April 2026 01:16:02 +0000 (0:00:00.075) 0:00:16.679 ********** 2026-04-10 01:16:04.355857 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:04.355862 | orchestrator | 2026-04-10 01:16:04.355866 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-10 01:16:04.355870 | orchestrator | Friday 10 April 2026 01:16:03 +0000 (0:00:01.310) 0:00:17.990 ********** 
2026-04-10 01:16:04.355874 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-10 01:16:04.355878 | orchestrator |  "msg": [ 2026-04-10 01:16:04.355884 | orchestrator |  "Validator run completed.", 2026-04-10 01:16:04.355888 | orchestrator |  "You can find the report file here:", 2026-04-10 01:16:04.355892 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-10T01:15:47+00:00-report.json", 2026-04-10 01:16:04.355897 | orchestrator |  "on the following host:", 2026-04-10 01:16:04.355902 | orchestrator |  "testbed-manager" 2026-04-10 01:16:04.355906 | orchestrator |  ] 2026-04-10 01:16:04.355910 | orchestrator | } 2026-04-10 01:16:04.355914 | orchestrator | 2026-04-10 01:16:04.355919 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:16:04.355924 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-10 01:16:04.355929 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:16:04.355933 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:16:04.355937 | orchestrator | 2026-04-10 01:16:04.355941 | orchestrator | 2026-04-10 01:16:04.355946 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:16:04.355977 | orchestrator | Friday 10 April 2026 01:16:04 +0000 (0:00:00.402) 0:00:18.392 ********** 2026-04-10 01:16:04.355986 | orchestrator | =============================================================================== 2026-04-10 01:16:04.355992 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.85s 2026-04-10 01:16:04.355998 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2026-04-10 01:16:04.356004 | orchestrator | Get container info 
------------------------------------------------------ 1.53s 2026-04-10 01:16:04.356009 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2026-04-10 01:16:04.356015 | orchestrator | Gather status data ------------------------------------------------------ 1.30s 2026-04-10 01:16:04.356022 | orchestrator | Get timestamp for report file ------------------------------------------- 1.00s 2026-04-10 01:16:04.356041 | orchestrator | Aggregate test results step three --------------------------------------- 0.72s 2026-04-10 01:16:04.356053 | orchestrator | Create report output directory ------------------------------------------ 0.72s 2026-04-10 01:16:04.356059 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.46s 2026-04-10 01:16:04.356064 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-10 01:16:04.356070 | orchestrator | Flush handlers ---------------------------------------------------------- 0.38s 2026-04-10 01:16:04.356076 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2026-04-10 01:16:04.356082 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s 2026-04-10 01:16:04.356087 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s 2026-04-10 01:16:04.356094 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2026-04-10 01:16:04.356099 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-04-10 01:16:04.356105 | orchestrator | Set test result to passed if container is existing ---------------------- 0.30s 2026-04-10 01:16:04.356111 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-04-10 01:16:04.356116 | orchestrator | Set fsid test vars 
------------------------------------------------------ 0.29s 2026-04-10 01:16:04.356122 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-04-10 01:16:04.547931 | orchestrator | + osism validate ceph-mgrs 2026-04-10 01:16:33.788905 | orchestrator | 2026-04-10 01:16:33.789014 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-10 01:16:33.789023 | orchestrator | 2026-04-10 01:16:33.789028 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-10 01:16:33.789033 | orchestrator | Friday 10 April 2026 01:16:19 +0000 (0:00:00.523) 0:00:00.523 ********** 2026-04-10 01:16:33.789038 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:33.789042 | orchestrator | 2026-04-10 01:16:33.789047 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-10 01:16:33.789051 | orchestrator | Friday 10 April 2026 01:16:20 +0000 (0:00:01.041) 0:00:01.564 ********** 2026-04-10 01:16:33.789056 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:33.789060 | orchestrator | 2026-04-10 01:16:33.789064 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-10 01:16:33.789069 | orchestrator | Friday 10 April 2026 01:16:21 +0000 (0:00:00.705) 0:00:02.270 ********** 2026-04-10 01:16:33.789073 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789078 | orchestrator | 2026-04-10 01:16:33.789095 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-10 01:16:33.789099 | orchestrator | Friday 10 April 2026 01:16:21 +0000 (0:00:00.126) 0:00:02.397 ********** 2026-04-10 01:16:33.789103 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789107 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:33.789111 | 
orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:33.789115 | orchestrator | 2026-04-10 01:16:33.789133 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-10 01:16:33.789138 | orchestrator | Friday 10 April 2026 01:16:21 +0000 (0:00:00.281) 0:00:02.678 ********** 2026-04-10 01:16:33.789142 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:33.789145 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789149 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:33.789153 | orchestrator | 2026-04-10 01:16:33.789158 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-10 01:16:33.789164 | orchestrator | Friday 10 April 2026 01:16:23 +0000 (0:00:01.430) 0:00:04.109 ********** 2026-04-10 01:16:33.789170 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789175 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:16:33.789184 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:16:33.789190 | orchestrator | 2026-04-10 01:16:33.789196 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-10 01:16:33.789201 | orchestrator | Friday 10 April 2026 01:16:23 +0000 (0:00:00.330) 0:00:04.439 ********** 2026-04-10 01:16:33.789207 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789212 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:33.789218 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:33.789225 | orchestrator | 2026-04-10 01:16:33.789231 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-10 01:16:33.789238 | orchestrator | Friday 10 April 2026 01:16:23 +0000 (0:00:00.287) 0:00:04.726 ********** 2026-04-10 01:16:33.789243 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789249 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:33.789255 | orchestrator | ok: [testbed-node-2] 2026-04-10 
01:16:33.789261 | orchestrator | 2026-04-10 01:16:33.789267 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-04-10 01:16:33.789273 | orchestrator | Friday 10 April 2026 01:16:24 +0000 (0:00:00.303) 0:00:05.030 ********** 2026-04-10 01:16:33.789278 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789284 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:16:33.789290 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:16:33.789296 | orchestrator | 2026-04-10 01:16:33.789302 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-10 01:16:33.789308 | orchestrator | Friday 10 April 2026 01:16:24 +0000 (0:00:00.438) 0:00:05.469 ********** 2026-04-10 01:16:33.789314 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789321 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:16:33.789326 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:16:33.789330 | orchestrator | 2026-04-10 01:16:33.789334 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-10 01:16:33.789338 | orchestrator | Friday 10 April 2026 01:16:24 +0000 (0:00:00.273) 0:00:05.742 ********** 2026-04-10 01:16:33.789341 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789345 | orchestrator | 2026-04-10 01:16:33.789350 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-10 01:16:33.789354 | orchestrator | Friday 10 April 2026 01:16:25 +0000 (0:00:00.251) 0:00:05.994 ********** 2026-04-10 01:16:33.789357 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789361 | orchestrator | 2026-04-10 01:16:33.789366 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-10 01:16:33.789372 | orchestrator | Friday 10 April 2026 01:16:25 +0000 (0:00:00.264) 0:00:06.258 ********** 2026-04-10 01:16:33.789378 | 
orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789385 | orchestrator | 2026-04-10 01:16:33.789394 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:33.789401 | orchestrator | Friday 10 April 2026 01:16:25 +0000 (0:00:00.264) 0:00:06.523 ********** 2026-04-10 01:16:33.789407 | orchestrator | 2026-04-10 01:16:33.789436 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:33.789442 | orchestrator | Friday 10 April 2026 01:16:25 +0000 (0:00:00.078) 0:00:06.601 ********** 2026-04-10 01:16:33.789448 | orchestrator | 2026-04-10 01:16:33.789464 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:33.789471 | orchestrator | Friday 10 April 2026 01:16:25 +0000 (0:00:00.071) 0:00:06.672 ********** 2026-04-10 01:16:33.789477 | orchestrator | 2026-04-10 01:16:33.789483 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-10 01:16:33.789491 | orchestrator | Friday 10 April 2026 01:16:25 +0000 (0:00:00.231) 0:00:06.904 ********** 2026-04-10 01:16:33.789497 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789504 | orchestrator | 2026-04-10 01:16:33.789511 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-10 01:16:33.789515 | orchestrator | Friday 10 April 2026 01:16:26 +0000 (0:00:00.256) 0:00:07.160 ********** 2026-04-10 01:16:33.789520 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789524 | orchestrator | 2026-04-10 01:16:33.789543 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-10 01:16:33.789548 | orchestrator | Friday 10 April 2026 01:16:26 +0000 (0:00:00.251) 0:00:07.411 ********** 2026-04-10 01:16:33.789553 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789557 | 
orchestrator | 2026-04-10 01:16:33.789562 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-04-10 01:16:33.789566 | orchestrator | Friday 10 April 2026 01:16:26 +0000 (0:00:00.126) 0:00:07.538 ********** 2026-04-10 01:16:33.789570 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:16:33.789575 | orchestrator | 2026-04-10 01:16:33.789579 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-10 01:16:33.789584 | orchestrator | Friday 10 April 2026 01:16:28 +0000 (0:00:01.713) 0:00:09.251 ********** 2026-04-10 01:16:33.789588 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789592 | orchestrator | 2026-04-10 01:16:33.789597 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-10 01:16:33.789601 | orchestrator | Friday 10 April 2026 01:16:28 +0000 (0:00:00.260) 0:00:09.512 ********** 2026-04-10 01:16:33.789606 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789610 | orchestrator | 2026-04-10 01:16:33.789614 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-10 01:16:33.789618 | orchestrator | Friday 10 April 2026 01:16:28 +0000 (0:00:00.299) 0:00:09.812 ********** 2026-04-10 01:16:33.789622 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789627 | orchestrator | 2026-04-10 01:16:33.789631 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-10 01:16:33.789635 | orchestrator | Friday 10 April 2026 01:16:28 +0000 (0:00:00.138) 0:00:09.951 ********** 2026-04-10 01:16:33.789640 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:16:33.789644 | orchestrator | 2026-04-10 01:16:33.789648 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-10 01:16:33.789652 | orchestrator | Friday 10 April 2026 01:16:29 +0000 (0:00:00.157) 
0:00:10.108 ********** 2026-04-10 01:16:33.789657 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:33.789661 | orchestrator | 2026-04-10 01:16:33.789665 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-10 01:16:33.789674 | orchestrator | Friday 10 April 2026 01:16:29 +0000 (0:00:00.284) 0:00:10.392 ********** 2026-04-10 01:16:33.789679 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:16:33.789683 | orchestrator | 2026-04-10 01:16:33.789688 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-10 01:16:33.789692 | orchestrator | Friday 10 April 2026 01:16:29 +0000 (0:00:00.243) 0:00:10.635 ********** 2026-04-10 01:16:33.789696 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:33.789701 | orchestrator | 2026-04-10 01:16:33.789705 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-10 01:16:33.789709 | orchestrator | Friday 10 April 2026 01:16:31 +0000 (0:00:01.572) 0:00:12.208 ********** 2026-04-10 01:16:33.789714 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:33.789723 | orchestrator | 2026-04-10 01:16:33.789727 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-10 01:16:33.789731 | orchestrator | Friday 10 April 2026 01:16:31 +0000 (0:00:00.288) 0:00:12.497 ********** 2026-04-10 01:16:33.789736 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:33.789740 | orchestrator | 2026-04-10 01:16:33.789745 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:33.789749 | orchestrator | Friday 10 April 2026 01:16:31 +0000 (0:00:00.273) 0:00:12.771 ********** 2026-04-10 01:16:33.789753 | orchestrator | 2026-04-10 01:16:33.789757 | 
orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:33.789762 | orchestrator | Friday 10 April 2026 01:16:31 +0000 (0:00:00.072) 0:00:12.843 ********** 2026-04-10 01:16:33.789766 | orchestrator | 2026-04-10 01:16:33.789770 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:16:33.789775 | orchestrator | Friday 10 April 2026 01:16:31 +0000 (0:00:00.069) 0:00:12.913 ********** 2026-04-10 01:16:33.789779 | orchestrator | 2026-04-10 01:16:33.789784 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-10 01:16:33.789788 | orchestrator | Friday 10 April 2026 01:16:32 +0000 (0:00:00.072) 0:00:12.986 ********** 2026-04-10 01:16:33.789792 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:33.789796 | orchestrator | 2026-04-10 01:16:33.789801 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-10 01:16:33.789805 | orchestrator | Friday 10 April 2026 01:16:33 +0000 (0:00:01.318) 0:00:14.305 ********** 2026-04-10 01:16:33.789810 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-10 01:16:33.789815 | orchestrator |  "msg": [ 2026-04-10 01:16:33.789820 | orchestrator |  "Validator run completed.", 2026-04-10 01:16:33.789824 | orchestrator |  "You can find the report file here:", 2026-04-10 01:16:33.789829 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-10T01:16:20+00:00-report.json", 2026-04-10 01:16:33.789834 | orchestrator |  "on the following host:", 2026-04-10 01:16:33.789838 | orchestrator |  "testbed-manager" 2026-04-10 01:16:33.789842 | orchestrator |  ] 2026-04-10 01:16:33.789846 | orchestrator | } 2026-04-10 01:16:33.789850 | orchestrator | 2026-04-10 01:16:33.789854 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-10 01:16:33.789861 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-10 01:16:33.789868 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:16:33.789879 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-10 01:16:34.136590 | orchestrator | 2026-04-10 01:16:34.136681 | orchestrator | 2026-04-10 01:16:34.136689 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:16:34.136696 | orchestrator | Friday 10 April 2026 01:16:33 +0000 (0:00:00.437) 0:00:14.742 ********** 2026-04-10 01:16:34.136700 | orchestrator | =============================================================================== 2026-04-10 01:16:34.136705 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.71s 2026-04-10 01:16:34.136709 | orchestrator | Aggregate test results step one ----------------------------------------- 1.57s 2026-04-10 01:16:34.136713 | orchestrator | Get container info ------------------------------------------------------ 1.43s 2026-04-10 01:16:34.136717 | orchestrator | Write report file ------------------------------------------------------- 1.32s 2026-04-10 01:16:34.136721 | orchestrator | Get timestamp for report file ------------------------------------------- 1.04s 2026-04-10 01:16:34.136725 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-04-10 01:16:34.136747 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.44s 2026-04-10 01:16:34.136751 | orchestrator | Print report file information ------------------------------------------- 0.44s 2026-04-10 01:16:34.136755 | orchestrator | Flush handlers 
---------------------------------------------------------- 0.38s 2026-04-10 01:16:34.136758 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2026-04-10 01:16:34.136762 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-04-10 01:16:34.136766 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.30s 2026-04-10 01:16:34.136770 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-04-10 01:16:34.136774 | orchestrator | Set test result to passed if container is existing ---------------------- 0.29s 2026-04-10 01:16:34.136778 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s 2026-04-10 01:16:34.136782 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-04-10 01:16:34.136786 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2026-04-10 01:16:34.136790 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.27s 2026-04-10 01:16:34.136794 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2026-04-10 01:16:34.136798 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2026-04-10 01:16:34.322925 | orchestrator | + osism validate ceph-osds 2026-04-10 01:16:53.415956 | orchestrator | 2026-04-10 01:16:53.416040 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-10 01:16:53.416047 | orchestrator | 2026-04-10 01:16:53.416063 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-10 01:16:53.416070 | orchestrator | Friday 10 April 2026 01:16:49 +0000 (0:00:00.499) 0:00:00.499 ********** 2026-04-10 01:16:53.416078 | orchestrator | 
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:53.416085 | orchestrator | 2026-04-10 01:16:53.416091 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-10 01:16:53.416097 | orchestrator | Friday 10 April 2026 01:16:50 +0000 (0:00:00.999) 0:00:01.499 ********** 2026-04-10 01:16:53.416103 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:53.416109 | orchestrator | 2026-04-10 01:16:53.416117 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-10 01:16:53.416123 | orchestrator | Friday 10 April 2026 01:16:50 +0000 (0:00:00.259) 0:00:01.759 ********** 2026-04-10 01:16:53.416130 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:16:53.416136 | orchestrator | 2026-04-10 01:16:53.416155 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-10 01:16:53.416162 | orchestrator | Friday 10 April 2026 01:16:51 +0000 (0:00:00.712) 0:00:02.471 ********** 2026-04-10 01:16:53.416175 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:16:53.416183 | orchestrator | 2026-04-10 01:16:53.416190 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-10 01:16:53.416196 | orchestrator | Friday 10 April 2026 01:16:51 +0000 (0:00:00.121) 0:00:02.593 ********** 2026-04-10 01:16:53.416203 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:16:53.416209 | orchestrator | 2026-04-10 01:16:53.416215 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-10 01:16:53.416222 | orchestrator | Friday 10 April 2026 01:16:51 +0000 (0:00:00.121) 0:00:02.715 ********** 2026-04-10 01:16:53.416229 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:16:53.416235 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:16:53.416242 | orchestrator 
| skipping: [testbed-node-5] 2026-04-10 01:16:53.416247 | orchestrator | 2026-04-10 01:16:53.416253 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-10 01:16:53.416260 | orchestrator | Friday 10 April 2026 01:16:52 +0000 (0:00:00.443) 0:00:03.158 ********** 2026-04-10 01:16:53.416268 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:16:53.416325 | orchestrator | 2026-04-10 01:16:53.416333 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-10 01:16:53.416341 | orchestrator | Friday 10 April 2026 01:16:52 +0000 (0:00:00.163) 0:00:03.321 ********** 2026-04-10 01:16:53.416345 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:16:53.416352 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:16:53.416358 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:16:53.416364 | orchestrator | 2026-04-10 01:16:53.416371 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-10 01:16:53.416392 | orchestrator | Friday 10 April 2026 01:16:52 +0000 (0:00:00.305) 0:00:03.627 ********** 2026-04-10 01:16:53.416399 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:16:53.416508 | orchestrator | 2026-04-10 01:16:53.416520 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-10 01:16:53.416527 | orchestrator | Friday 10 April 2026 01:16:52 +0000 (0:00:00.332) 0:00:03.959 ********** 2026-04-10 01:16:53.416534 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:16:53.416541 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:16:53.416547 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:16:53.416553 | orchestrator | 2026-04-10 01:16:53.416560 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-10 01:16:53.416567 | orchestrator | Friday 10 April 2026 01:16:53 +0000 (0:00:00.294) 0:00:04.254 ********** 2026-04-10 
01:16:53.416592 | orchestrator | skipping: [testbed-node-3] => (item={'id': '209a573e5a067da1f2fc9c3103030bb318d8c777dcc3ecd9cf01226f004bc7eb', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-10 01:16:53.416603 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cc2213ac853e0c40481aba9485f806cede635dfff7f746f3fda1d0442c79c102', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-10 01:16:53.416613 | orchestrator | skipping: [testbed-node-3] => (item={'id': '11f56aa14554193b30a98a3b6c92f9ffe6f17f32cee9d74b15aabd1e07f9834c', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-10 01:16:53.416621 | orchestrator | skipping: [testbed-node-3] => (item={'id': '06d7b9e5968291c52ef9c88abf1f992b4e30622c95818eb91708a181ad7e4b63', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-10 01:16:53.416641 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cad3ea925d3f5294f25fadca2de6bcc1f32451fc1267617ef958e2f5983551c6', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.416666 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7471308e9295443b9498bc7a93ccd06f9ada7401f2429d38365805f1df249de', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.416673 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78f4c724937ce5d588a003873c69aeaf296d461450b9529cd1628e25db073f07', 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.416680 | orchestrator | skipping: [testbed-node-3] => (item={'id': '32451e0adbe8250258e2ee737d1201b0a3f02c512beeb04e4aa3f4a5dd4ab2fb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-10 01:16:53.416687 | orchestrator | skipping: [testbed-node-3] => (item={'id': '21ef0b08ea35ba1c9f87f1dcac17547cbc17b09f02a35321b7b87bced362ac69', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-10 01:16:53.416702 | orchestrator | skipping: [testbed-node-3] => (item={'id': '57c81b22098dafe85018387e262e861404499685b13374d0df72695a493e28b6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-10 01:16:53.416709 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c8e5098ba9a6e9cbc0259d16112ed688d3b444f65e438d42719b4ff514dc9acd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-10 01:16:53.416716 | orchestrator | ok: [testbed-node-3] => (item={'id': '27051575d2a5553dcc4a711b6f2d51e615ec9188d09e346bb082ca8ac2a3dea9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-10 01:16:53.416723 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb96bdb492737cba604a0062bdee2d2b5d68bbbc536943ecb3a6958412041216', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-10 01:16:53.416730 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'54d22ca0b3455cfe95242ffce08c458aab17a2b76944ab88f497c744bb6cd184', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-10 01:16:53.416737 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'de7f672c303b127e46508bf430be3bf3f65f491fece6f4820c20c05ae2cc90ac', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-10 01:16:53.416744 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2f0db2de5fe55ba70d87d98693bc297fe66de8ea62f0d0bae7c5c61a221bd734', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-10 01:16:53.416750 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8c0edbf9b203545c1e6608d38fb0d892ffabf3778f7352c8a84177827a8c62a8', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-10 01:16:53.416757 | orchestrator | skipping: [testbed-node-3] => (item={'id': '165e9a85e3a75b8ebddb8c7f6cd240bf16110b9cc01b0eb02386597244cc29c3', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-10 01:16:53.416765 | orchestrator | skipping: [testbed-node-4] => (item={'id': '65dc902aaa7816d5245ed5d488c97b221a3a5e394e9c90ceab4005e266f25afa', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-10 01:16:53.416770 | orchestrator | skipping: [testbed-node-4] => (item={'id': '746f7226e65025f342669223495f4b5995a5564e1175fb158aceb65ba9e5c90a', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-10 01:16:53.416781 | 
orchestrator | skipping: [testbed-node-4] => (item={'id': '7b8234ab152fda5fa97f07bea6be2cb9f843f984b5de7a41e440380f024515ed', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-10 01:16:53.416793 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f20909e5cfe6f23faa40d4fe6fef8865e01a55f0251a84f4ec7f5122d198ec2f', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-10 01:16:53.577931 | orchestrator | skipping: [testbed-node-4] => (item={'id': '218974d553ea2e6dad968f3e78a6f7f5bea5b132e7274fe21d18e94d3a2be34f', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.578081 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3fe090059087bde7ad468875b73d9417346f70c71d45d8750470e07d2034a993', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.578094 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2b12ebb2f8f862fcb5f8061a9e963f5ec69a6acabbb06f07449943f527bab996', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.578100 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fccc981f7ec528ea34e2e75ca2708bd4f470d90bae61ae587ab7614878ffccb8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-10 01:16:53.578105 | orchestrator | skipping: [testbed-node-4] => (item={'id': '15642a8b491f7f578709c5cb98b5c3c33988012240e65c4a289f82cd34530964', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-10 01:16:53.578109 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a4e4e9ffc923c767e0ad337e3868467140ec26d992afb63f783b76d401712798', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-10 01:16:53.578115 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e7f752780d2d46ff54a749afaecc99a120824550997ef41b55b9ee877229289d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-10 01:16:53.578120 | orchestrator | ok: [testbed-node-4] => (item={'id': '57707f0b19128f3aa12eebfa6fd1ed12c37699d48f90d874dd63f59788bc2f92', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-10 01:16:53.578124 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c90705c63d1c1e0429b5d9c0e131a2a1cf1aa98c547824e8069ca7d2cd8c31e5', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-10 01:16:53.578129 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffbb73db6316e089d89fa529232365adf44f378bff352d956db3fc9e3e7a71ac', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-10 01:16:53.578133 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cfd996d3c8fe1b49d93bd0e67325d71f35091cae6ac4c48156a789b1a8de3bd0', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-10 01:16:53.578137 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'32c1903bc49ec267bf55769559ecc9c63d104b3d0a2fe5eb2afc1e109c700557', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-10 01:16:53.578142 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f423a2e783f59a8cfb57fcf9d23133f34f1707112f76ce62771de938527ae27a', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-10 01:16:53.578146 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b1e88844e081008e4c9db1655470b525514c102689f6f2233ece27eb73f4d0f2', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-10 01:16:53.578150 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ab0a6cc1b18bf07db6fc631ffd572f53a0171153a531f2fd734b1a6e63e27373', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-10 01:16:53.578174 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd87f6a4f21d8b4d3659315913553903d37dee5143e2c15fbfab3496de3ce486f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-04-10 01:16:53.578179 | orchestrator | skipping: [testbed-node-5] => (item={'id': '76145cf47ccbb5332d35478dd4f8b87e421be923707bdb78008a50fce864cc52', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-10 01:16:53.578183 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0db3c3a53d953d91b1b94552dfb2bd908dcc4cb86e8097df3a772589cd9b7672', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2026-04-10 01:16:53.578188 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': '12345044b2c68d2b389749507411b4021bdf4d99f5c6d97a1f8d2d19db151412', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.578192 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'afae9b2c00850a53737e8da2fda61662ec453999d678a7dc62cf978ba16c076b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-04-10 01:16:53.578196 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd49cdcae82c9ca9a563f07690c73f95395a885464c8deb8a7bd4dd46c4ccd58c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-04-10 01:16:53.578200 | orchestrator | skipping: [testbed-node-5] => (item={'id': '577cccaa32dd813724802b79a5c008ed87fc767686ba766904a9229a0640a043', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2026-04-10 01:16:53.578204 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c528ca991dadc8fc4f371f5af1f4a90f7a996b6908a7fc8c657aff4f68a52a24', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2026-04-10 01:16:53.578208 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9e1eaa73160e16aa500d5a3afe25b5c919b26c6a251c8a9b35602c1462fff2d8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-10 01:16:53.578223 | orchestrator | ok: [testbed-node-5] => (item={'id': '93ffe20a3cd4465cf1915707155e1f15e172751875ad4fe504ea3fe183dec83d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': 
'/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-10 01:16:53.578227 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b33b3c42653c3b891ef2ed80f9266a581a1bc8ed20bcdfb7e7e28fe12b3ee0a5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2026-04-10 01:16:53.578231 | orchestrator | skipping: [testbed-node-5] => (item={'id': '49c70f18a52a92bdf3650fbf6df1d8b34c916288d96e1254e0eb844f79424584', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2026-04-10 01:16:53.578235 | orchestrator | skipping: [testbed-node-5] => (item={'id': '078458bfdcdb7654e2a3e72a57beb47cd9e8e6596e65b295df7a9baa62f1e0b3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-10 01:16:53.578239 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad6e0d0b4624de5b6da1b4a97813736801ff5ae2aa0cd65a5abd4688a7827ab8', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2026-04-10 01:16:53.578249 | orchestrator | skipping: [testbed-node-5] => (item={'id': '889551cd916ad2e82f31a5112759c3771648e719dac3c99377435b772697f849', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-10 01:16:53.578253 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cb3beb47e0b46abb3fd981d6e1dbd734f81ca020342cf2205e0623252c34eeab', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2026-04-10 01:16:53.578261 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac43fc659e427486d409bb1c6bdd0f62827e06a455344bf8fbee42d4acfa2b1e', 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2026-04-10 01:17:06.462904 | orchestrator | 2026-04-10 01:17:06.463032 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-10 01:17:06.463046 | orchestrator | Friday 10 April 2026 01:16:53 +0000 (0:00:00.665) 0:00:04.920 ********** 2026-04-10 01:17:06.463054 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463062 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463068 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463075 | orchestrator | 2026-04-10 01:17:06.463081 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-10 01:17:06.463088 | orchestrator | Friday 10 April 2026 01:16:54 +0000 (0:00:00.306) 0:00:05.226 ********** 2026-04-10 01:17:06.463095 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463101 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:17:06.463105 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:06.463109 | orchestrator | 2026-04-10 01:17:06.463113 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-10 01:17:06.463117 | orchestrator | Friday 10 April 2026 01:16:54 +0000 (0:00:00.288) 0:00:05.515 ********** 2026-04-10 01:17:06.463121 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463125 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463129 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463133 | orchestrator | 2026-04-10 01:17:06.463137 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-10 01:17:06.463141 | orchestrator | Friday 10 April 2026 01:16:54 +0000 (0:00:00.307) 0:00:05.822 ********** 2026-04-10 01:17:06.463145 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463149 | orchestrator | ok: [testbed-node-4] 
2026-04-10 01:17:06.463153 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463157 | orchestrator | 2026-04-10 01:17:06.463161 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-10 01:17:06.463165 | orchestrator | Friday 10 April 2026 01:16:55 +0000 (0:00:00.497) 0:00:06.319 ********** 2026-04-10 01:17:06.463169 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-10 01:17:06.463174 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-10 01:17:06.463178 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463182 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-10 01:17:06.463186 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-10 01:17:06.463190 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:17:06.463194 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-10 01:17:06.463198 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-10 01:17:06.463201 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:06.463205 | orchestrator | 2026-04-10 01:17:06.463209 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-10 01:17:06.463213 | orchestrator | Friday 10 April 2026 01:16:55 +0000 (0:00:00.326) 0:00:06.645 ********** 2026-04-10 01:17:06.463234 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463238 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463242 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463245 | orchestrator | 2026-04-10 01:17:06.463249 | orchestrator | TASK [Set test result to 
failed if an OSD is not running] ********************** 2026-04-10 01:17:06.463253 | orchestrator | Friday 10 April 2026 01:16:55 +0000 (0:00:00.292) 0:00:06.938 ********** 2026-04-10 01:17:06.463257 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463265 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:17:06.463271 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:06.463280 | orchestrator | 2026-04-10 01:17:06.463289 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-10 01:17:06.463295 | orchestrator | Friday 10 April 2026 01:16:56 +0000 (0:00:00.297) 0:00:07.236 ********** 2026-04-10 01:17:06.463300 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463306 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:17:06.463311 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:06.463316 | orchestrator | 2026-04-10 01:17:06.463323 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-10 01:17:06.463328 | orchestrator | Friday 10 April 2026 01:16:56 +0000 (0:00:00.480) 0:00:07.716 ********** 2026-04-10 01:17:06.463334 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463339 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463344 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463349 | orchestrator | 2026-04-10 01:17:06.463355 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-10 01:17:06.463360 | orchestrator | Friday 10 April 2026 01:16:56 +0000 (0:00:00.310) 0:00:08.026 ********** 2026-04-10 01:17:06.463365 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463371 | orchestrator | 2026-04-10 01:17:06.463377 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-10 01:17:06.463396 | orchestrator | Friday 10 April 2026 01:16:57 +0000 (0:00:00.264) 0:00:08.291 
********** 2026-04-10 01:17:06.463448 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463454 | orchestrator | 2026-04-10 01:17:06.463459 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-10 01:17:06.463465 | orchestrator | Friday 10 April 2026 01:16:57 +0000 (0:00:00.248) 0:00:08.539 ********** 2026-04-10 01:17:06.463471 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463476 | orchestrator | 2026-04-10 01:17:06.463482 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:17:06.463488 | orchestrator | Friday 10 April 2026 01:16:57 +0000 (0:00:00.245) 0:00:08.785 ********** 2026-04-10 01:17:06.463494 | orchestrator | 2026-04-10 01:17:06.463499 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:17:06.463506 | orchestrator | Friday 10 April 2026 01:16:57 +0000 (0:00:00.065) 0:00:08.851 ********** 2026-04-10 01:17:06.463512 | orchestrator | 2026-04-10 01:17:06.463518 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:17:06.463541 | orchestrator | Friday 10 April 2026 01:16:57 +0000 (0:00:00.068) 0:00:08.919 ********** 2026-04-10 01:17:06.463548 | orchestrator | 2026-04-10 01:17:06.463553 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-10 01:17:06.463559 | orchestrator | Friday 10 April 2026 01:16:57 +0000 (0:00:00.068) 0:00:08.987 ********** 2026-04-10 01:17:06.463565 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463571 | orchestrator | 2026-04-10 01:17:06.463577 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-10 01:17:06.463584 | orchestrator | Friday 10 April 2026 01:16:58 +0000 (0:00:00.658) 0:00:09.646 ********** 2026-04-10 01:17:06.463590 | orchestrator | skipping: 
[testbed-node-3] 2026-04-10 01:17:06.463597 | orchestrator | 2026-04-10 01:17:06.463604 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-10 01:17:06.463610 | orchestrator | Friday 10 April 2026 01:16:58 +0000 (0:00:00.242) 0:00:09.889 ********** 2026-04-10 01:17:06.463625 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463629 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463634 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463638 | orchestrator | 2026-04-10 01:17:06.463643 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-10 01:17:06.463647 | orchestrator | Friday 10 April 2026 01:16:59 +0000 (0:00:00.290) 0:00:10.180 ********** 2026-04-10 01:17:06.463652 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463656 | orchestrator | 2026-04-10 01:17:06.463661 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-10 01:17:06.463665 | orchestrator | Friday 10 April 2026 01:16:59 +0000 (0:00:00.270) 0:00:10.450 ********** 2026-04-10 01:17:06.463670 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-10 01:17:06.463675 | orchestrator | 2026-04-10 01:17:06.463679 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-10 01:17:06.463683 | orchestrator | Friday 10 April 2026 01:17:01 +0000 (0:00:01.955) 0:00:12.406 ********** 2026-04-10 01:17:06.463687 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463691 | orchestrator | 2026-04-10 01:17:06.463695 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-10 01:17:06.463698 | orchestrator | Friday 10 April 2026 01:17:01 +0000 (0:00:00.116) 0:00:12.523 ********** 2026-04-10 01:17:06.463702 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463706 | orchestrator | 2026-04-10 
01:17:06.463710 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-10 01:17:06.463713 | orchestrator | Friday 10 April 2026 01:17:01 +0000 (0:00:00.289) 0:00:12.812 ********** 2026-04-10 01:17:06.463717 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463721 | orchestrator | 2026-04-10 01:17:06.463725 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-10 01:17:06.463728 | orchestrator | Friday 10 April 2026 01:17:01 +0000 (0:00:00.117) 0:00:12.930 ********** 2026-04-10 01:17:06.463733 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463739 | orchestrator | 2026-04-10 01:17:06.463745 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-10 01:17:06.463751 | orchestrator | Friday 10 April 2026 01:17:01 +0000 (0:00:00.127) 0:00:13.057 ********** 2026-04-10 01:17:06.463757 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463763 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463770 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463776 | orchestrator | 2026-04-10 01:17:06.463783 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-10 01:17:06.463788 | orchestrator | Friday 10 April 2026 01:17:02 +0000 (0:00:00.446) 0:00:13.503 ********** 2026-04-10 01:17:06.463791 | orchestrator | changed: [testbed-node-3] 2026-04-10 01:17:06.463795 | orchestrator | changed: [testbed-node-4] 2026-04-10 01:17:06.463799 | orchestrator | changed: [testbed-node-5] 2026-04-10 01:17:06.463803 | orchestrator | 2026-04-10 01:17:06.463807 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-10 01:17:06.463811 | orchestrator | Friday 10 April 2026 01:17:04 +0000 (0:00:01.751) 0:00:15.255 ********** 2026-04-10 01:17:06.463815 | orchestrator | ok: [testbed-node-3] 2026-04-10 
01:17:06.463818 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463822 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463826 | orchestrator | 2026-04-10 01:17:06.463830 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-10 01:17:06.463834 | orchestrator | Friday 10 April 2026 01:17:04 +0000 (0:00:00.296) 0:00:15.551 ********** 2026-04-10 01:17:06.463838 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463841 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463845 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463849 | orchestrator | 2026-04-10 01:17:06.463853 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-10 01:17:06.463857 | orchestrator | Friday 10 April 2026 01:17:04 +0000 (0:00:00.452) 0:00:16.004 ********** 2026-04-10 01:17:06.463865 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463869 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:17:06.463873 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:06.463876 | orchestrator | 2026-04-10 01:17:06.463880 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-10 01:17:06.463889 | orchestrator | Friday 10 April 2026 01:17:05 +0000 (0:00:00.483) 0:00:16.488 ********** 2026-04-10 01:17:06.463893 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:06.463897 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:06.463901 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:06.463905 | orchestrator | 2026-04-10 01:17:06.463909 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-10 01:17:06.463913 | orchestrator | Friday 10 April 2026 01:17:05 +0000 (0:00:00.295) 0:00:16.783 ********** 2026-04-10 01:17:06.463916 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463920 | orchestrator | skipping: 
[testbed-node-4] 2026-04-10 01:17:06.463924 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:06.463928 | orchestrator | 2026-04-10 01:17:06.463932 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-10 01:17:06.463936 | orchestrator | Friday 10 April 2026 01:17:05 +0000 (0:00:00.286) 0:00:17.069 ********** 2026-04-10 01:17:06.463940 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:06.463944 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:17:06.463948 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:06.463952 | orchestrator | 2026-04-10 01:17:06.463960 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-10 01:17:13.781072 | orchestrator | Friday 10 April 2026 01:17:06 +0000 (0:00:00.476) 0:00:17.545 ********** 2026-04-10 01:17:13.781201 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:13.781217 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:13.781226 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:13.781234 | orchestrator | 2026-04-10 01:17:13.781244 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-10 01:17:13.781252 | orchestrator | Friday 10 April 2026 01:17:06 +0000 (0:00:00.490) 0:00:18.036 ********** 2026-04-10 01:17:13.781260 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:13.781267 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:13.781274 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:13.781281 | orchestrator | 2026-04-10 01:17:13.781289 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-10 01:17:13.781297 | orchestrator | Friday 10 April 2026 01:17:07 +0000 (0:00:00.486) 0:00:18.522 ********** 2026-04-10 01:17:13.781304 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:13.781312 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:13.781320 | 
orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:13.781327 | orchestrator | 2026-04-10 01:17:13.781335 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-10 01:17:13.781343 | orchestrator | Friday 10 April 2026 01:17:07 +0000 (0:00:00.310) 0:00:18.832 ********** 2026-04-10 01:17:13.781350 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:13.781359 | orchestrator | skipping: [testbed-node-4] 2026-04-10 01:17:13.781368 | orchestrator | skipping: [testbed-node-5] 2026-04-10 01:17:13.781376 | orchestrator | 2026-04-10 01:17:13.781384 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-10 01:17:13.781391 | orchestrator | Friday 10 April 2026 01:17:08 +0000 (0:00:00.492) 0:00:19.324 ********** 2026-04-10 01:17:13.781453 | orchestrator | ok: [testbed-node-3] 2026-04-10 01:17:13.781462 | orchestrator | ok: [testbed-node-4] 2026-04-10 01:17:13.781470 | orchestrator | ok: [testbed-node-5] 2026-04-10 01:17:13.781478 | orchestrator | 2026-04-10 01:17:13.781486 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-10 01:17:13.781494 | orchestrator | Friday 10 April 2026 01:17:08 +0000 (0:00:00.315) 0:00:19.640 ********** 2026-04-10 01:17:13.781502 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:17:13.781533 | orchestrator | 2026-04-10 01:17:13.781543 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-10 01:17:13.781551 | orchestrator | Friday 10 April 2026 01:17:08 +0000 (0:00:00.255) 0:00:19.896 ********** 2026-04-10 01:17:13.781559 | orchestrator | skipping: [testbed-node-3] 2026-04-10 01:17:13.781567 | orchestrator | 2026-04-10 01:17:13.781576 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-10 01:17:13.781585 | orchestrator | Friday 10 April 2026 01:17:09 
+0000 (0:00:00.248) 0:00:20.144 ********** 2026-04-10 01:17:13.781593 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:17:13.781601 | orchestrator | 2026-04-10 01:17:13.781609 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-10 01:17:13.781617 | orchestrator | Friday 10 April 2026 01:17:10 +0000 (0:00:01.789) 0:00:21.934 ********** 2026-04-10 01:17:13.781625 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:17:13.781634 | orchestrator | 2026-04-10 01:17:13.781642 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-10 01:17:13.781650 | orchestrator | Friday 10 April 2026 01:17:11 +0000 (0:00:00.271) 0:00:22.205 ********** 2026-04-10 01:17:13.781658 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:17:13.781666 | orchestrator | 2026-04-10 01:17:13.781674 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:17:13.781682 | orchestrator | Friday 10 April 2026 01:17:11 +0000 (0:00:00.264) 0:00:22.470 ********** 2026-04-10 01:17:13.781690 | orchestrator | 2026-04-10 01:17:13.781698 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:17:13.781706 | orchestrator | Friday 10 April 2026 01:17:11 +0000 (0:00:00.072) 0:00:22.543 ********** 2026-04-10 01:17:13.781715 | orchestrator | 2026-04-10 01:17:13.781723 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-10 01:17:13.781731 | orchestrator | Friday 10 April 2026 01:17:11 +0000 (0:00:00.246) 0:00:22.789 ********** 2026-04-10 01:17:13.781739 | orchestrator | 2026-04-10 01:17:13.781747 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-10 01:17:13.781755 | orchestrator | Friday 10 
April 2026 01:17:11 +0000 (0:00:00.071) 0:00:22.861 ********** 2026-04-10 01:17:13.781763 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-10 01:17:13.781772 | orchestrator | 2026-04-10 01:17:13.781780 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-10 01:17:13.781787 | orchestrator | Friday 10 April 2026 01:17:13 +0000 (0:00:01.290) 0:00:24.152 ********** 2026-04-10 01:17:13.781794 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-10 01:17:13.781802 | orchestrator |  "msg": [ 2026-04-10 01:17:13.781811 | orchestrator |  "Validator run completed.", 2026-04-10 01:17:13.781820 | orchestrator |  "You can find the report file here:", 2026-04-10 01:17:13.781828 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-10T01:16:50+00:00-report.json", 2026-04-10 01:17:13.781838 | orchestrator |  "on the following host:", 2026-04-10 01:17:13.781847 | orchestrator |  "testbed-manager" 2026-04-10 01:17:13.781855 | orchestrator |  ] 2026-04-10 01:17:13.781863 | orchestrator | } 2026-04-10 01:17:13.781872 | orchestrator | 2026-04-10 01:17:13.781880 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:17:13.781889 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-10 01:17:13.781899 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-10 01:17:13.781929 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-10 01:17:13.781944 | orchestrator | 2026-04-10 01:17:13.781952 | orchestrator | 2026-04-10 01:17:13.781960 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:17:13.782072 | orchestrator | Friday 10 April 2026 01:17:13 +0000 (0:00:00.405) 
0:00:24.557 ********** 2026-04-10 01:17:13.782085 | orchestrator | =============================================================================== 2026-04-10 01:17:13.782107 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.96s 2026-04-10 01:17:13.782122 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2026-04-10 01:17:13.782129 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.75s 2026-04-10 01:17:13.782137 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2026-04-10 01:17:13.782145 | orchestrator | Get timestamp for report file ------------------------------------------- 1.00s 2026-04-10 01:17:13.782152 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-04-10 01:17:13.782160 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.67s 2026-04-10 01:17:13.782167 | orchestrator | Print report file information ------------------------------------------- 0.66s 2026-04-10 01:17:13.782175 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-04-10 01:17:13.782183 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.49s 2026-04-10 01:17:13.782191 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-04-10 01:17:13.782199 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.49s 2026-04-10 01:17:13.782206 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.48s 2026-04-10 01:17:13.782214 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.48s 2026-04-10 01:17:13.782221 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs 
------------------ 0.48s 2026-04-10 01:17:13.782229 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.45s 2026-04-10 01:17:13.782237 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s 2026-04-10 01:17:13.782244 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.44s 2026-04-10 01:17:13.782251 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-04-10 01:17:13.782259 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-04-10 01:17:13.977009 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-10 01:17:13.985016 | orchestrator | + set -e 2026-04-10 01:17:13.986500 | orchestrator | + source /opt/manager-vars.sh 2026-04-10 01:17:13.986553 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-10 01:17:13.986562 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-10 01:17:13.986569 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-10 01:17:13.986575 | orchestrator | ++ CEPH_VERSION=reef 2026-04-10 01:17:13.986583 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-10 01:17:13.986592 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-10 01:17:13.986599 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-10 01:17:13.986607 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-10 01:17:13.986614 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-10 01:17:13.986622 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-10 01:17:13.986629 | orchestrator | ++ export ARA=false 2026-04-10 01:17:13.986636 | orchestrator | ++ ARA=false 2026-04-10 01:17:13.986643 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-10 01:17:13.986650 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-10 01:17:13.986656 | orchestrator | ++ export TEMPEST=true 2026-04-10 01:17:13.986663 | orchestrator | ++ TEMPEST=true 
2026-04-10 01:17:13.986669 | orchestrator | ++ export IS_ZUUL=true 2026-04-10 01:17:13.986676 | orchestrator | ++ IS_ZUUL=true 2026-04-10 01:17:13.986683 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 01:17:13.986689 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34 2026-04-10 01:17:13.986696 | orchestrator | ++ export EXTERNAL_API=false 2026-04-10 01:17:13.986703 | orchestrator | ++ EXTERNAL_API=false 2026-04-10 01:17:13.986709 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-10 01:17:13.986716 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-10 01:17:13.986792 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-10 01:17:13.986800 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-10 01:17:13.986807 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-10 01:17:13.986813 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-10 01:17:13.986820 | orchestrator | + source /etc/os-release 2026-04-10 01:17:13.986826 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-10 01:17:13.986833 | orchestrator | ++ NAME=Ubuntu 2026-04-10 01:17:13.986839 | orchestrator | ++ VERSION_ID=24.04 2026-04-10 01:17:13.986847 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-10 01:17:13.986853 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-10 01:17:13.986860 | orchestrator | ++ ID=ubuntu 2026-04-10 01:17:13.986865 | orchestrator | ++ ID_LIKE=debian 2026-04-10 01:17:13.986871 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-10 01:17:13.986877 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-10 01:17:13.986883 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-10 01:17:13.986889 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-10 01:17:13.986896 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-10 01:17:13.986902 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-10 01:17:13.986908 | orchestrator | + [[ 
ubuntu == \u\b\u\n\t\u ]] 2026-04-10 01:17:13.986928 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-10 01:17:13.986936 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-10 01:17:14.017927 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-10 01:17:38.520340 | orchestrator | 2026-04-10 01:17:38.520445 | orchestrator | # Status of Elasticsearch 2026-04-10 01:17:38.520458 | orchestrator | 2026-04-10 01:17:38.520465 | orchestrator | + pushd /opt/configuration/contrib 2026-04-10 01:17:38.520501 | orchestrator | + echo 2026-04-10 01:17:38.520510 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-10 01:17:38.520517 | orchestrator | + echo 2026-04-10 01:17:38.520524 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-10 01:17:38.679215 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-10 01:17:38.679278 | orchestrator | 2026-04-10 01:17:38.679288 | orchestrator | # Status of MariaDB 2026-04-10 01:17:38.679297 | orchestrator | + echo 2026-04-10 01:17:38.679304 | orchestrator | + echo '# Status of MariaDB' 2026-04-10 01:17:38.679310 | orchestrator | + echo 2026-04-10 01:17:38.679317 | orchestrator | 2026-04-10 01:17:38.680006 | orchestrator | ++ semver latest 10.0.0-0 2026-04-10 01:17:38.730929 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-10 01:17:38.731004 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-10 01:17:38.731018 | orchestrator | + osism status database 2026-04-10 01:17:40.279993 | orchestrator | 2026-04-10 01:17:40 | ERROR  | Unable to get ansible vault password 2026-04-10 01:17:40.280060 | orchestrator | 2026-04-10 01:17:40 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:17:40.280071 | orchestrator | 2026-04-10 01:17:40 | ERROR  | Dropping encrypted entries 2026-04-10 01:17:40.314953 | orchestrator | 2026-04-10 01:17:40 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 
2026-04-10 01:17:40.323180 | orchestrator | 2026-04-10 01:17:40 | INFO  | Cluster Status: Primary 2026-04-10 01:17:40.323227 | orchestrator | 2026-04-10 01:17:40 | INFO  | Connected: ON 2026-04-10 01:17:40.323233 | orchestrator | 2026-04-10 01:17:40 | INFO  | Ready: ON 2026-04-10 01:17:40.323238 | orchestrator | 2026-04-10 01:17:40 | INFO  | Cluster Size: 3 2026-04-10 01:17:40.323242 | orchestrator | 2026-04-10 01:17:40 | INFO  | Local State: Synced 2026-04-10 01:17:40.323246 | orchestrator | 2026-04-10 01:17:40 | INFO  | Cluster State UUID: cd919d9f-3477-11f1-aaad-de4483beb2d9 2026-04-10 01:17:40.323251 | orchestrator | 2026-04-10 01:17:40 | INFO  | Cluster Members: 192.168.16.10:3306,192.168.16.11:3306,192.168.16.12:3306 2026-04-10 01:17:40.323306 | orchestrator | 2026-04-10 01:17:40 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-10 01:17:40.323312 | orchestrator | 2026-04-10 01:17:40 | INFO  | Local Node UUID: 018b079d-3478-11f1-8de9-f675553bf821 2026-04-10 01:17:40.323316 | orchestrator | 2026-04-10 01:17:40 | INFO  | Flow Control Paused: 0.00% 2026-04-10 01:17:40.323321 | orchestrator | 2026-04-10 01:17:40 | INFO  | Recv Queue Avg: 0 2026-04-10 01:17:40.323330 | orchestrator | 2026-04-10 01:17:40 | INFO  | Send Queue Avg: 0.000454339 2026-04-10 01:17:40.323334 | orchestrator | 2026-04-10 01:17:40 | INFO  | Transactions: 4361 local commits, 6546 replicated, 75 received 2026-04-10 01:17:40.323337 | orchestrator | 2026-04-10 01:17:40 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-10 01:17:40.323341 | orchestrator | 2026-04-10 01:17:40 | INFO  | MariaDB Uptime: 21 minutes, 55 seconds 2026-04-10 01:17:40.323469 | orchestrator | 2026-04-10 01:17:40 | INFO  | Threads: 132 connected, 1 running 2026-04-10 01:17:40.323481 | orchestrator | 2026-04-10 01:17:40 | INFO  | Queries: 212411 total, 0 slow 2026-04-10 01:17:40.323488 | orchestrator | 2026-04-10 01:17:40 | INFO  | Aborted Connects: 137 2026-04-10 01:17:40.323635 | orchestrator | 2026-04-10 
01:17:40 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-10 01:17:40.576296 | orchestrator | 2026-04-10 01:17:40.576353 | orchestrator | # Status of Prometheus 2026-04-10 01:17:40.576363 | orchestrator | 2026-04-10 01:17:40.576371 | orchestrator | + echo 2026-04-10 01:17:40.576379 | orchestrator | + echo '# Status of Prometheus' 2026-04-10 01:17:40.576417 | orchestrator | + echo 2026-04-10 01:17:40.576424 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-10 01:17:40.627192 | orchestrator | Unauthorized 2026-04-10 01:17:40.630318 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-10 01:17:40.677676 | orchestrator | Unauthorized 2026-04-10 01:17:40.680482 | orchestrator | 2026-04-10 01:17:40.680533 | orchestrator | # Status of RabbitMQ 2026-04-10 01:17:40.680543 | orchestrator | 2026-04-10 01:17:40.680552 | orchestrator | + echo 2026-04-10 01:17:40.680560 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-10 01:17:40.680568 | orchestrator | + echo 2026-04-10 01:17:40.681893 | orchestrator | ++ semver latest 10.0.0-0 2026-04-10 01:17:40.738312 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-10 01:17:40.738429 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-10 01:17:40.738445 | orchestrator | + osism status messaging 2026-04-10 01:17:48.059578 | orchestrator | 2026-04-10 01:17:48 | ERROR  | Unable to get ansible vault password 2026-04-10 01:17:48.059654 | orchestrator | 2026-04-10 01:17:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:17:48.059663 | orchestrator | 2026-04-10 01:17:48 | ERROR  | Dropping encrypted entries 2026-04-10 01:17:48.094072 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 
2026-04-10 01:17:48.165357 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-04-10 01:17:48.165622 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-04-10 01:17:48.165646 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-10 01:17:48.165655 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-10 01:17:48.165666 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-10 01:17:48.165676 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-10 01:17:48.165711 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-10 01:17:48.165719 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Connections: 210, Channels: 209, Queues: 173 2026-04-10 01:17:48.165728 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Messages: 229 total, 229 ready, 0 unacked 2026-04-10 01:17:48.165736 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Message Rates: 6.4/s publish, 7.2/s deliver 2026-04-10 01:17:48.165795 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Disk Free: 56.3 GB (limit: 0.0 GB) 2026-04-10 01:17:48.165803 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-10 01:17:48.165818 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] File Descriptors: 119/1024 2026-04-10 01:17:48.165823 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-0] Sockets: 71/832 2026-04-10 01:17:48.165828 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 
2026-04-10 01:17:48.233302 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-04-10 01:17:48.233371 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-04-10 01:17:48.233377 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-10 01:17:48.233475 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-10 01:17:48.233488 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-10 01:17:48.233864 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-10 01:17:48.234007 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-10 01:17:48.234256 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Connections: 210, Channels: 209, Queues: 173 2026-04-10 01:17:48.234514 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Messages: 229 total, 229 ready, 0 unacked 2026-04-10 01:17:48.234760 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Message Rates: 6.4/s publish, 7.2/s deliver 2026-04-10 01:17:48.235136 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Disk Free: 56.5 GB (limit: 0.0 GB) 2026-04-10 01:17:48.235154 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-10 01:17:48.235484 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] File Descriptors: 125/1024 2026-04-10 01:17:48.235493 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-1] Sockets: 79/832 2026-04-10 01:17:48.235932 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 
2026-04-10 01:17:48.302064 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-04-10 01:17:48.302132 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-04-10 01:17:48.302139 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-10 01:17:48.302145 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-10 01:17:48.302150 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-10 01:17:48.302183 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-10 01:17:48.302187 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-10 01:17:48.302191 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Connections: 210, Channels: 209, Queues: 173 2026-04-10 01:17:48.302195 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Messages: 229 total, 229 ready, 0 unacked 2026-04-10 01:17:48.302358 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Message Rates: 6.4/s publish, 7.2/s deliver 2026-04-10 01:17:48.302486 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Disk Free: 56.5 GB (limit: 0.0 GB) 2026-04-10 01:17:48.302994 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-10 01:17:48.303046 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] File Descriptors: 106/1024 2026-04-10 01:17:48.303137 | orchestrator | 2026-04-10 01:17:48 | INFO  | [testbed-node-2] Sockets: 60/832 2026-04-10 01:17:48.303149 | orchestrator | 2026-04-10 01:17:48 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-10 01:17:48.602476 | orchestrator | 2026-04-10 01:17:48.602571 | 
orchestrator | # Status of Redis 2026-04-10 01:17:48.602582 | orchestrator | 2026-04-10 01:17:48.602588 | orchestrator | + echo 2026-04-10 01:17:48.602595 | orchestrator | + echo '# Status of Redis' 2026-04-10 01:17:48.602603 | orchestrator | + echo 2026-04-10 01:17:48.602612 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-10 01:17:48.607924 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001827s;;;0.000000;10.000000 2026-04-10 01:17:48.608597 | orchestrator | 2026-04-10 01:17:48.608627 | orchestrator | # Create backup of MariaDB database 2026-04-10 01:17:48.608634 | orchestrator | 2026-04-10 01:17:48.608639 | orchestrator | + popd 2026-04-10 01:17:48.608644 | orchestrator | + echo 2026-04-10 01:17:48.608649 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-10 01:17:48.608654 | orchestrator | + echo 2026-04-10 01:17:48.608659 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-10 01:17:49.947874 | orchestrator | 2026-04-10 01:17:49 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-10 01:17:50.015115 | orchestrator | 2026-04-10 01:17:50 | INFO  | Task eb997b4f-7098-4156-892f-1fb28c4e390a (mariadb_backup) was prepared for execution. 2026-04-10 01:17:50.015228 | orchestrator | 2026-04-10 01:17:50 | INFO  | It takes a moment until task eb997b4f-7098-4156-892f-1fb28c4e390a (mariadb_backup) has been started and output is visible here. 
2026-04-10 01:18:50.720781 | orchestrator | 2026-04-10 01:18:50.720877 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-10 01:18:50.720890 | orchestrator | 2026-04-10 01:18:50.720897 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-10 01:18:50.720904 | orchestrator | Friday 10 April 2026 01:17:53 +0000 (0:00:00.245) 0:00:00.245 ********** 2026-04-10 01:18:50.720912 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:18:50.720917 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:18:50.720921 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:18:50.720925 | orchestrator | 2026-04-10 01:18:50.720929 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-10 01:18:50.720934 | orchestrator | Friday 10 April 2026 01:17:53 +0000 (0:00:00.312) 0:00:00.558 ********** 2026-04-10 01:18:50.720938 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-10 01:18:50.720942 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-10 01:18:50.720946 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-10 01:18:50.720965 | orchestrator | 2026-04-10 01:18:50.720969 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-10 01:18:50.720973 | orchestrator | 2026-04-10 01:18:50.720977 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-10 01:18:50.720981 | orchestrator | Friday 10 April 2026 01:17:53 +0000 (0:00:00.407) 0:00:00.965 ********** 2026-04-10 01:18:50.720985 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-10 01:18:50.720989 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-10 01:18:50.720994 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-10 01:18:50.720997 | orchestrator | 
2026-04-10 01:18:50.721001 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-10 01:18:50.721006 | orchestrator | Friday 10 April 2026 01:17:54 +0000 (0:00:00.384) 0:00:01.350 ********** 2026-04-10 01:18:50.721011 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-10 01:18:50.721016 | orchestrator | 2026-04-10 01:18:50.721020 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-10 01:18:50.721024 | orchestrator | Friday 10 April 2026 01:17:55 +0000 (0:00:00.656) 0:00:02.006 ********** 2026-04-10 01:18:50.721027 | orchestrator | ok: [testbed-node-1] 2026-04-10 01:18:50.721031 | orchestrator | ok: [testbed-node-0] 2026-04-10 01:18:50.721035 | orchestrator | ok: [testbed-node-2] 2026-04-10 01:18:50.721039 | orchestrator | 2026-04-10 01:18:50.721043 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-10 01:18:50.721047 | orchestrator | Friday 10 April 2026 01:17:58 +0000 (0:00:03.291) 0:00:05.298 ********** 2026-04-10 01:18:50.721050 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:18:50.721056 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:18:50.721060 | orchestrator | changed: [testbed-node-0] 2026-04-10 01:18:50.721064 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-10 01:18:50.721067 | orchestrator | 2026-04-10 01:18:50.721072 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-10 01:18:50.721076 | orchestrator | skipping: no hosts matched 2026-04-10 01:18:50.721080 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-10 01:18:50.721083 | orchestrator | 2026-04-10 01:18:50.721087 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-10 01:18:50.721091 | orchestrator | skipping: no hosts matched 2026-04-10 01:18:50.721095 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-10 01:18:50.721099 | orchestrator | mariadb_bootstrap_restart 2026-04-10 01:18:50.721103 | orchestrator | 2026-04-10 01:18:50.721107 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-10 01:18:50.721111 | orchestrator | skipping: no hosts matched 2026-04-10 01:18:50.721114 | orchestrator | 2026-04-10 01:18:50.721118 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-10 01:18:50.721122 | orchestrator | 2026-04-10 01:18:50.721126 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-10 01:18:50.721130 | orchestrator | Friday 10 April 2026 01:18:49 +0000 (0:00:51.574) 0:00:56.872 ********** 2026-04-10 01:18:50.721145 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:18:50.721149 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:18:50.721153 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:18:50.721157 | orchestrator | 2026-04-10 01:18:50.721161 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-10 01:18:50.721164 | orchestrator | Friday 10 April 2026 01:18:50 +0000 (0:00:00.300) 0:00:57.173 ********** 2026-04-10 01:18:50.721168 | orchestrator | skipping: [testbed-node-0] 2026-04-10 01:18:50.721172 | orchestrator | skipping: [testbed-node-1] 2026-04-10 01:18:50.721176 | orchestrator | skipping: [testbed-node-2] 2026-04-10 01:18:50.721180 | orchestrator | 2026-04-10 01:18:50.721183 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:18:50.721192 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-10 01:18:50.721197 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-10 01:18:50.721202 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-10 01:18:50.721205 | orchestrator | 2026-04-10 01:18:50.721209 | orchestrator | 2026-04-10 01:18:50.721213 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:18:50.721217 | orchestrator | Friday 10 April 2026 01:18:50 +0000 (0:00:00.220) 0:00:57.393 ********** 2026-04-10 01:18:50.721221 | orchestrator | =============================================================================== 2026-04-10 01:18:50.721224 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 51.57s 2026-04-10 01:18:50.721240 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.29s 2026-04-10 01:18:50.721244 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.66s 2026-04-10 01:18:50.721248 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-04-10 01:18:50.721252 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s 2026-04-10 01:18:50.721256 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-10 01:18:50.721260 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2026-04-10 01:18:50.721263 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2026-04-10 01:18:50.918632 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-10 01:18:50.928059 | orchestrator | + set -e 2026-04-10 01:18:50.928109 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-10 01:18:50.928116 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-10 01:18:50.928121 | orchestrator | ++ INTERACTIVE=false 2026-04-10 01:18:50.928125 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-10 01:18:50.928129 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-10 01:18:50.928133 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-10 01:18:50.929222 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-10 01:18:50.936157 | orchestrator | 2026-04-10 01:18:50.936224 | orchestrator | # OpenStack endpoints 2026-04-10 01:18:50.936237 | orchestrator | 2026-04-10 01:18:50.936267 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-10 01:18:50.936278 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-10 01:18:50.936287 | orchestrator | + export OS_CLOUD=admin 2026-04-10 01:18:50.936296 | orchestrator | + OS_CLOUD=admin 2026-04-10 01:18:50.936305 | orchestrator | + echo 2026-04-10 01:18:50.936314 | orchestrator | + echo '# OpenStack endpoints' 2026-04-10 01:18:50.936323 | orchestrator | + echo 2026-04-10 01:18:50.936333 | orchestrator | + openstack endpoint list 2026-04-10 01:18:54.335119 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-10 01:18:54.335237 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-10 01:18:54.335245 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-10 01:18:54.335250 | orchestrator | | 0f3523e212c04a36a9f8a1b8dfa30483 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-10 01:18:54.335254 | orchestrator | | 27045e61361f40a0aafcb8a02f0c1a38 | RegionOne | neutron | 
network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-10 01:18:54.335278 | orchestrator | | 3352b1dfa7e94631a4d893968897de49 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-10 01:18:54.335303 | orchestrator | | 388d67e56c014f58a58e731330476db8 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-10 01:18:54.335307 | orchestrator | | 3e3085fc58ff4d6d8924b1aa1efd1ac1 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-10 01:18:54.335311 | orchestrator | | 4eb80564e2a94c9f927a2afc3e040f78 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-10 01:18:54.335315 | orchestrator | | 53f647b184c542f2a457ee0665ff46b3 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-10 01:18:54.335318 | orchestrator | | 5af73d98faf24656ae3c09e2eb0d37c6 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-10 01:18:54.335323 | orchestrator | | 65697433bb614f7fa75e0b931fad1283 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-10 01:18:54.335326 | orchestrator | | 685e525361f244498e94d8a8e190630b | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-10 01:18:54.335330 | orchestrator | | 6dc0909c938e4c5ea256b16f896ff9be | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-10 01:18:54.335334 | orchestrator | | 771cbfd4cbe64b2a99d4e5eccb70f944 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-10 01:18:54.335338 | orchestrator | | 7871432efbc34831bb51e023f1739517 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 
2026-04-10 01:18:54.335342 | orchestrator | | 804a1036439849328fd15dc575a7d36a | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-10 01:18:54.335414 | orchestrator | | 80910cf381374fa087fea794ffc2318e | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-10 01:18:54.335422 | orchestrator | | 97756533b0c049c19751100c2b04507b | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-10 01:18:54.335426 | orchestrator | | a5790e09fea344579de2e4e0152fff2a | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-10 01:18:54.335429 | orchestrator | | b030e559677a45fcad090422590736e9 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-10 01:18:54.335433 | orchestrator | | b34f5f8ee4984d928f1f40a4e84d10c1 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-10 01:18:54.335437 | orchestrator | | c6fdb88f08a54e6d95ec55d3c2b50a89 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-10 01:18:54.335458 | orchestrator | | eba42efb70fb479cb707af9bf9d69e86 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-10 01:18:54.335462 | orchestrator | | f842c59ac3b14594a154e7f91134e338 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-10 01:18:54.335465 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-10 01:18:54.621481 | orchestrator | 2026-04-10 01:18:54.621590 | orchestrator | # Cinder 2026-04-10 01:18:54.621597 | orchestrator | 2026-04-10 01:18:54.621601 | 
orchestrator | + echo 2026-04-10 01:18:54.621606 | orchestrator | + echo '# Cinder' 2026-04-10 01:18:54.621610 | orchestrator | + echo 2026-04-10 01:18:54.621615 | orchestrator | + openstack volume service list 2026-04-10 01:18:58.423757 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-10 01:18:58.423859 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-10 01:18:58.423867 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-10 01:18:58.423894 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-10T01:18:51.000000 | 2026-04-10 01:18:58.423901 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-10T01:18:51.000000 | 2026-04-10 01:18:58.423906 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-10T01:18:52.000000 | 2026-04-10 01:18:58.423912 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-10T01:18:51.000000 | 2026-04-10 01:18:58.423918 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-10T01:18:56.000000 | 2026-04-10 01:18:58.423924 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-10T01:18:56.000000 | 2026-04-10 01:18:58.423930 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-10T01:18:48.000000 | 2026-04-10 01:18:58.423936 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-10T01:18:50.000000 | 2026-04-10 01:18:58.423943 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-10T01:18:50.000000 | 2026-04-10 01:18:58.423949 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
2026-04-10 01:18:58.667289 | orchestrator | 2026-04-10 01:18:58.667431 | orchestrator | # Neutron 2026-04-10 01:18:58.667440 | orchestrator | 2026-04-10 01:18:58.667445 | orchestrator | + echo 2026-04-10 01:18:58.667449 | orchestrator | + echo '# Neutron' 2026-04-10 01:18:58.667454 | orchestrator | + echo 2026-04-10 01:18:58.667458 | orchestrator | + openstack network agent list 2026-04-10 01:19:01.415317 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-10 01:19:01.415446 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-10 01:19:01.415452 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-10 01:19:01.415457 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-10 01:19:01.415461 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-10 01:19:01.415465 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-10 01:19:01.415469 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-10 01:19:01.415473 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-10 01:19:01.415477 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-10 01:19:01.415480 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-10 01:19:01.415506 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent 
| testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-10 01:19:01.415510 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-10 01:19:01.415513 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-10 01:19:01.666516 | orchestrator | + openstack network service provider list 2026-04-10 01:19:04.292420 | orchestrator | +---------------+------+---------+ 2026-04-10 01:19:04.292496 | orchestrator | | Service Type | Name | Default | 2026-04-10 01:19:04.292503 | orchestrator | +---------------+------+---------+ 2026-04-10 01:19:04.292507 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-10 01:19:04.292512 | orchestrator | +---------------+------+---------+ 2026-04-10 01:19:04.533963 | orchestrator | 2026-04-10 01:19:04.534100 | orchestrator | # Nova 2026-04-10 01:19:04.534113 | orchestrator | 2026-04-10 01:19:04.534120 | orchestrator | + echo 2026-04-10 01:19:04.534127 | orchestrator | + echo '# Nova' 2026-04-10 01:19:04.534135 | orchestrator | + echo 2026-04-10 01:19:04.534142 | orchestrator | + openstack compute service list 2026-04-10 01:19:07.299597 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-10 01:19:07.299697 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-10 01:19:07.299710 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-10 01:19:07.299716 | orchestrator | | 84a9fee8-106d-4350-9ada-b5cfa6b6e762 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-10T01:19:04.000000 | 2026-04-10 01:19:07.299723 | orchestrator | | f5fb2b7f-5f95-4a99-bd6b-27b036352de8 | 
nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-10T01:18:59.000000 | 2026-04-10 01:19:07.299748 | orchestrator | | 34a0c1f9-6da4-47fb-8234-5b99bb2b40ba | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-10T01:19:03.000000 | 2026-04-10 01:19:07.299755 | orchestrator | | 410f25c2-2911-4905-93d8-f7a8c6a15ff4 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-10T01:18:57.000000 | 2026-04-10 01:19:07.299760 | orchestrator | | c2a493da-34e6-4873-8fd3-4946075701d1 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-10T01:18:57.000000 | 2026-04-10 01:19:07.299767 | orchestrator | | 2cf8fa51-5f1f-40d3-90c6-67d4a96e6529 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-10T01:18:57.000000 | 2026-04-10 01:19:07.299773 | orchestrator | | 69dd36fd-2855-4b88-b4c1-e89b524a7ac3 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-10T01:19:00.000000 | 2026-04-10 01:19:07.299780 | orchestrator | | 8d34ff3b-5885-4ab7-af36-352ea3faf72b | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-10T01:19:00.000000 | 2026-04-10 01:19:07.299787 | orchestrator | | cbfbf0e3-c965-4cce-961c-2ced0d340de9 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-10T01:19:01.000000 | 2026-04-10 01:19:07.299793 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-10 01:19:07.538989 | orchestrator | + openstack hypervisor list 2026-04-10 01:19:10.168592 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-10 01:19:10.168675 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-10 01:19:10.168681 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-10 01:19:10.168686 | orchestrator | | 
e4c02eb1-4b0c-4d4f-89a9-dc08d1352d37 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-10 01:19:10.168690 | orchestrator | | b3773364-5030-43da-812b-f9e86dd9d994 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-10 01:19:10.168714 | orchestrator | | 2fb5c25d-8b81-4ce7-96a0-f5d960625e18 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-10 01:19:10.168718 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-10 01:19:10.456936 | orchestrator | 2026-04-10 01:19:10.457007 | orchestrator | # Run OpenStack test play 2026-04-10 01:19:10.457014 | orchestrator | 2026-04-10 01:19:10.457019 | orchestrator | + echo 2026-04-10 01:19:10.457023 | orchestrator | + echo '# Run OpenStack test play' 2026-04-10 01:19:10.457029 | orchestrator | + echo 2026-04-10 01:19:10.457034 | orchestrator | + osism apply --environment openstack test 2026-04-10 01:19:11.701120 | orchestrator | 2026-04-10 01:19:11 | INFO  | Trying to run play test in environment openstack 2026-04-10 01:19:11.728576 | orchestrator | 2026-04-10 01:19:11 | INFO  | Prepare task for execution of test. 2026-04-10 01:19:11.796023 | orchestrator | 2026-04-10 01:19:11 | INFO  | Task 003d9075-5214-41e0-86ca-fc73f535c7aa (test) was prepared for execution. 2026-04-10 01:19:11.796093 | orchestrator | 2026-04-10 01:19:11 | INFO  | It takes a moment until task 003d9075-5214-41e0-86ca-fc73f535c7aa (test) has been started and output is visible here. 
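Before the test play runs, the two Nova listings above can be cross-checked: every enabled `nova-compute` host should also appear in the hypervisor list. A sketch under that assumption, with the captured host names standing in for live `openstack compute service list` / `openstack hypervisor list` calls:

```shell
# Cross-check: each nova-compute host must be registered as a hypervisor.
# The host lists below are the values captured in the log above; live, they
# would come from the two `openstack` CLI calls shown in the job output.
computes='testbed-node-5
testbed-node-3
testbed-node-4'
hypervisors='testbed-node-5
testbed-node-3
testbed-node-4'
missing=''
for host in $computes; do
    # grep -x matches the whole line, so partial host names do not count.
    printf '%s\n' "$hypervisors" | grep -qx "$host" || missing="$missing $host"
done
if [ -n "$missing" ]; then
    echo "compute hosts without hypervisor record:$missing"
    exit 1
fi
echo "all compute hosts registered"
```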
2026-04-10 01:22:25.326117 | orchestrator | 2026-04-10 01:22:25.326172 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-10 01:22:25.326178 | orchestrator | 2026-04-10 01:22:25.326183 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-10 01:22:25.326187 | orchestrator | Friday 10 April 2026 01:19:15 +0000 (0:00:00.119) 0:00:00.119 ********** 2026-04-10 01:22:25.326191 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326195 | orchestrator | 2026-04-10 01:22:25.326199 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-10 01:22:25.326203 | orchestrator | Friday 10 April 2026 01:19:18 +0000 (0:00:03.760) 0:00:03.880 ********** 2026-04-10 01:22:25.326207 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326211 | orchestrator | 2026-04-10 01:22:25.326237 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-10 01:22:25.326242 | orchestrator | Friday 10 April 2026 01:19:23 +0000 (0:00:04.213) 0:00:08.093 ********** 2026-04-10 01:22:25.326246 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326250 | orchestrator | 2026-04-10 01:22:25.326254 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-10 01:22:25.326258 | orchestrator | Friday 10 April 2026 01:19:29 +0000 (0:00:06.356) 0:00:14.450 ********** 2026-04-10 01:22:25.326261 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326265 | orchestrator | 2026-04-10 01:22:25.326269 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-10 01:22:25.326273 | orchestrator | Friday 10 April 2026 01:19:33 +0000 (0:00:04.194) 0:00:18.644 ********** 2026-04-10 01:22:25.326277 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326281 | orchestrator | 2026-04-10 01:22:25.326285 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-10 01:22:25.326289 | orchestrator | Friday 10 April 2026 01:19:37 +0000 (0:00:04.132) 0:00:22.777 ********** 2026-04-10 01:22:25.326293 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-10 01:22:25.326297 | orchestrator | changed: [localhost] => (item=member) 2026-04-10 01:22:25.326302 | orchestrator | changed: [localhost] => (item=creator) 2026-04-10 01:22:25.326305 | orchestrator | 2026-04-10 01:22:25.326309 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-10 01:22:25.326314 | orchestrator | Friday 10 April 2026 01:19:49 +0000 (0:00:11.530) 0:00:34.308 ********** 2026-04-10 01:22:25.326318 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326321 | orchestrator | 2026-04-10 01:22:25.326325 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-10 01:22:25.326329 | orchestrator | Friday 10 April 2026 01:19:54 +0000 (0:00:04.819) 0:00:39.127 ********** 2026-04-10 01:22:25.326333 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326348 | orchestrator | 2026-04-10 01:22:25.326352 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-10 01:22:25.326356 | orchestrator | Friday 10 April 2026 01:19:59 +0000 (0:00:04.998) 0:00:44.125 ********** 2026-04-10 01:22:25.326359 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326363 | orchestrator | 2026-04-10 01:22:25.326375 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-10 01:22:25.326379 | orchestrator | Friday 10 April 2026 01:20:03 +0000 (0:00:04.202) 0:00:48.328 ********** 2026-04-10 01:22:25.326387 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326391 | orchestrator | 2026-04-10 01:22:25.326395 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-04-10 01:22:25.326399 | orchestrator | Friday 10 April 2026 01:20:07 +0000 (0:00:03.909) 0:00:52.238 ********** 2026-04-10 01:22:25.326403 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326407 | orchestrator | 2026-04-10 01:22:25.326410 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-10 01:22:25.326414 | orchestrator | Friday 10 April 2026 01:20:11 +0000 (0:00:04.031) 0:00:56.270 ********** 2026-04-10 01:22:25.326418 | orchestrator | changed: [localhost] 2026-04-10 01:22:25.326422 | orchestrator | 2026-04-10 01:22:25.326426 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-10 01:22:25.326429 | orchestrator | Friday 10 April 2026 01:20:15 +0000 (0:00:04.156) 0:01:00.426 ********** 2026-04-10 01:22:25.326433 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-10 01:22:25.326437 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-10 01:22:25.326441 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-10 01:22:25.326445 | orchestrator | 2026-04-10 01:22:25.326448 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-10 01:22:25.326452 | orchestrator | Friday 10 April 2026 01:20:29 +0000 (0:00:13.644) 0:01:14.070 ********** 2026-04-10 01:22:25.326456 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-10 01:22:25.326461 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-10 01:22:25.326465 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-10 01:22:25.326468 | orchestrator | 2026-04-10 01:22:25.326472 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-10 01:22:25.326476 | orchestrator | Friday 10 April 2026 01:20:45 +0000 (0:00:16.580) 0:01:30.651 ********** 2026-04-10 01:22:25.326480 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-10 01:22:25.326484 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-10 01:22:25.326488 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-10 01:22:25.326491 | orchestrator | 2026-04-10 01:22:25.326495 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-10 01:22:25.326499 | orchestrator | 2026-04-10 01:22:25.326503 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-10 01:22:25.326513 | orchestrator | Friday 10 April 2026 01:21:18 +0000 (0:00:33.086) 0:02:03.737 ********** 2026-04-10 01:22:25.326518 | orchestrator | ok: [localhost] 2026-04-10 01:22:25.326522 | orchestrator | 2026-04-10 01:22:25.326526 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-10 01:22:25.326529 | orchestrator | Friday 10 April 2026 01:21:22 +0000 (0:00:03.584) 0:02:07.322 ********** 2026-04-10 01:22:25.326541 | orchestrator | skipping: [localhost] 2026-04-10 01:22:25.326545 | orchestrator | 2026-04-10 01:22:25.326549 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-10 01:22:25.326553 | orchestrator | Friday 10 April 2026 01:21:22 +0000 (0:00:00.033) 0:02:07.355 ********** 2026-04-10 01:22:25.326560 | orchestrator | skipping: [localhost] 2026-04-10 01:22:25.326564 | orchestrator | 2026-04-10 01:22:25.326568 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-10 01:22:25.326572 | orchestrator | Friday 
10 April 2026 01:21:22 +0000 (0:00:00.033) 0:02:07.389 ********** 2026-04-10 01:22:25.326575 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-10 01:22:25.326579 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-10 01:22:25.326583 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-10 01:22:25.326587 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-10 01:22:25.326593 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-10 01:22:25.326600 | orchestrator | skipping: [localhost] 2026-04-10 01:22:25.326610 | orchestrator | 2026-04-10 01:22:25.326616 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-10 01:22:25.326631 | orchestrator | Friday 10 April 2026 01:21:22 +0000 (0:00:00.145) 0:02:07.534 ********** 2026-04-10 01:22:25.326637 | orchestrator | skipping: [localhost] 2026-04-10 01:22:25.326643 | orchestrator | 2026-04-10 01:22:25.326649 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-10 01:22:25.326655 | orchestrator | Friday 10 April 2026 01:21:22 +0000 (0:00:00.134) 0:02:07.669 ********** 2026-04-10 01:22:25.326662 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-10 01:22:25.326668 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-10 01:22:25.326674 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-10 01:22:25.326683 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-10 01:22:25.326691 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-10 01:22:25.326695 | orchestrator | 2026-04-10 01:22:25.326698 | 
orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-10 01:22:25.326702 | orchestrator | Friday 10 April 2026 01:21:27 +0000 (0:00:04.431) 0:02:12.101 ********** 2026-04-10 01:22:25.326706 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-10 01:22:25.326712 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-10 01:22:25.326716 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-10 01:22:25.326721 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-04-10 01:22:25.326725 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 2026-04-10 01:22:25.326731 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j490508632625.2776', 'results_file': '/ansible/.ansible_async/j490508632625.2776', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:22:25.326737 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j650027366793.2801', 'results_file': '/ansible/.ansible_async/j650027366793.2801', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:22:25.326741 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j788471879885.2826', 'results_file': '/ansible/.ansible_async/j788471879885.2826', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:22:25.326746 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j123323879689.2851', 'results_file': 
'/ansible/.ansible_async/j123323879689.2851', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:22:25.326755 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j858729305755.2876', 'results_file': '/ansible/.ansible_async/j858729305755.2876', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:22:25.326759 | orchestrator | 2026-04-10 01:22:25.326764 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-10 01:22:25.326768 | orchestrator | Friday 10 April 2026 01:22:24 +0000 (0:00:57.310) 0:03:09.411 ********** 2026-04-10 01:22:25.326777 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-10 01:23:36.684582 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-10 01:23:36.684666 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-10 01:23:36.684673 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-10 01:23:36.684678 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-10 01:23:36.684682 | orchestrator | 2026-04-10 01:23:36.684687 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-10 01:23:36.684691 | orchestrator | Friday 10 April 2026 01:22:28 +0000 (0:00:04.456) 0:03:13.867 ********** 2026-04-10 01:23:36.684696 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
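The `FAILED - RETRYING` lines above come from Ansible re-polling a check until it passes (typically `async_status` under an `until`/`retries`/`delay` loop, given the `ansible_job_id` results shown). The same poll-until-ready shape in plain shell — a sketch only, where `check_ready` is a hypothetical stand-in for a real probe such as `openstack server show <name> -f value -c status`:

```shell
# Poll-until-ready loop, the shell analogue of Ansible's until/retries/delay.
retries=60
delay=0        # the play waits a few seconds between polls; 0 keeps the sketch fast
attempt=0
# Stand-in readiness probe: pretends the resource becomes ready on the 3rd poll.
check_ready() { [ "$attempt" -ge 3 ]; }
until check_ready; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$retries" ]; then
        echo "timed out waiting for resource"
        exit 1
    fi
    sleep "$delay"
done
echo "ready after $attempt polls"
```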
2026-04-10 01:23:36.684703 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j917386128676.2986', 'results_file': '/ansible/.ansible_async/j917386128676.2986', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684709 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j391588471620.3011', 'results_file': '/ansible/.ansible_async/j391588471620.3011', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684714 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j45753940064.3036', 'results_file': '/ansible/.ansible_async/j45753940064.3036', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684728 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j74256115193.3061', 'results_file': '/ansible/.ansible_async/j74256115193.3061', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684732 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j406066306176.3086', 'results_file': '/ansible/.ansible_async/j406066306176.3086', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684737 | orchestrator | 2026-04-10 01:23:36.684741 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-10 01:23:36.684745 | orchestrator | Friday 10 April 2026 01:22:38 +0000 (0:00:09.389) 0:03:23.257 ********** 2026-04-10 01:23:36.684749 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-10 01:23:36.684753 | 
orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-10 01:23:36.684756 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-10 01:23:36.684760 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-10 01:23:36.684764 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-10 01:23:36.684768 | orchestrator | 2026-04-10 01:23:36.684772 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-10 01:23:36.684789 | orchestrator | Friday 10 April 2026 01:22:41 +0000 (0:00:03.779) 0:03:27.036 ********** 2026-04-10 01:23:36.684794 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-04-10 01:23:36.684798 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j839842431352.3155', 'results_file': '/ansible/.ansible_async/j839842431352.3155', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684802 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j976588894074.3180', 'results_file': '/ansible/.ansible_async/j976588894074.3180', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684806 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j78943002923.3206', 'results_file': '/ansible/.ansible_async/j78943002923.3206', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684810 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j374438729828.3232', 'results_file': '/ansible/.ansible_async/j374438729828.3232', 
'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684822 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j376368823911.3258', 'results_file': '/ansible/.ansible_async/j376368823911.3258', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-10 01:23:36.684827 | orchestrator | 2026-04-10 01:23:36.684830 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-10 01:23:36.684834 | orchestrator | Friday 10 April 2026 01:22:51 +0000 (0:00:09.445) 0:03:36.481 ********** 2026-04-10 01:23:36.684838 | orchestrator | changed: [localhost] 2026-04-10 01:23:36.684843 | orchestrator | 2026-04-10 01:23:36.684847 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-04-10 01:23:36.684851 | orchestrator | Friday 10 April 2026 01:22:57 +0000 (0:00:06.569) 0:03:43.050 ********** 2026-04-10 01:23:36.684855 | orchestrator | changed: [localhost] 2026-04-10 01:23:36.684859 | orchestrator | 2026-04-10 01:23:36.684862 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-10 01:23:36.684866 | orchestrator | Friday 10 April 2026 01:23:11 +0000 (0:00:13.785) 0:03:56.836 ********** 2026-04-10 01:23:36.684871 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-10 01:23:36.684875 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-10 01:23:36.684879 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-10 01:23:36.684883 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-10 01:23:36.684887 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-10 01:23:36.684890 | orchestrator | 2026-04-10 
01:23:36.684894 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-10 01:23:36.684898 | orchestrator | Friday 10 April 2026 01:23:36 +0000 (0:00:24.601) 0:04:21.438 ********** 2026-04-10 01:23:36.684902 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-10 01:23:36.684906 | orchestrator |  "msg": "test: 192.168.112.108" 2026-04-10 01:23:36.684910 | orchestrator | } 2026-04-10 01:23:36.684914 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-10 01:23:36.684918 | orchestrator |  "msg": "test-1: 192.168.112.101" 2026-04-10 01:23:36.684922 | orchestrator | } 2026-04-10 01:23:36.684926 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-10 01:23:36.684930 | orchestrator |  "msg": "test-2: 192.168.112.179" 2026-04-10 01:23:36.684934 | orchestrator | } 2026-04-10 01:23:36.684937 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-10 01:23:36.684946 | orchestrator |  "msg": "test-3: 192.168.112.164" 2026-04-10 01:23:36.684953 | orchestrator | } 2026-04-10 01:23:36.684962 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-10 01:23:36.684968 | orchestrator |  "msg": "test-4: 192.168.112.112" 2026-04-10 01:23:36.684974 | orchestrator | } 2026-04-10 01:23:36.684980 | orchestrator | 2026-04-10 01:23:36.684987 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:23:36.684994 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-10 01:23:36.685002 | orchestrator | 2026-04-10 01:23:36.685008 | orchestrator | 2026-04-10 01:23:36.685014 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:23:36.685021 | orchestrator | Friday 10 April 2026 01:23:36 +0000 (0:00:00.115) 0:04:21.554 ********** 2026-04-10 01:23:36.685033 | orchestrator | 
===============================================================================
2026-04-10 01:23:36.685036 | orchestrator | Wait for instance creation to complete --------------------------------- 57.31s
2026-04-10 01:23:36.685040 | orchestrator | Create test routers ---------------------------------------------------- 33.09s
2026-04-10 01:23:36.685044 | orchestrator | Create floating ip addresses ------------------------------------------- 24.60s
2026-04-10 01:23:36.685048 | orchestrator | Create test subnets ---------------------------------------------------- 16.58s
2026-04-10 01:23:36.685052 | orchestrator | Attach test volume ----------------------------------------------------- 13.79s
2026-04-10 01:23:36.685056 | orchestrator | Create test networks --------------------------------------------------- 13.64s
2026-04-10 01:23:36.685059 | orchestrator | Add member roles to user test ------------------------------------------ 11.53s
2026-04-10 01:23:36.685063 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.45s
2026-04-10 01:23:36.685067 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.39s
2026-04-10 01:23:36.685070 | orchestrator | Create test volume ------------------------------------------------------ 6.57s
2026-04-10 01:23:36.685074 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.36s
2026-04-10 01:23:36.685078 | orchestrator | Create ssh security group ----------------------------------------------- 5.00s
2026-04-10 01:23:36.685082 | orchestrator | Create test server group ------------------------------------------------ 4.82s
2026-04-10 01:23:36.685085 | orchestrator | Add metadata to instances ----------------------------------------------- 4.46s
2026-04-10 01:23:36.685089 | orchestrator | Create test instances --------------------------------------------------- 4.43s
2026-04-10 01:23:36.685093 | orchestrator | Create test-admin user -------------------------------------------------- 4.21s
2026-04-10 01:23:36.685096 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.20s
2026-04-10 01:23:36.685100 | orchestrator | Create test project ----------------------------------------------------- 4.19s
2026-04-10 01:23:36.685104 | orchestrator | Create test keypair ----------------------------------------------------- 4.16s
2026-04-10 01:23:36.685108 | orchestrator | Create test user -------------------------------------------------------- 4.13s
2026-04-10 01:23:36.850257 | orchestrator | + server_list
2026-04-10 01:23:36.850330 | orchestrator | + openstack --os-cloud test server list
2026-04-10 01:23:40.535537 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-10 01:23:40.535630 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-10 01:23:40.535641 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-10 01:23:40.535647 | orchestrator | | 2e86ece9-5403-4d48-b0d3-813e4a827038 | test-4 | ACTIVE | test-3=192.168.112.112, 192.168.202.207 | N/A (booted from volume) | SCS-1L-1 |
2026-04-10 01:23:40.535654 | orchestrator | | 50f46a7b-0771-478a-a406-4febd5208272 | test-3 | ACTIVE | test-2=192.168.112.164, 192.168.201.64 | N/A (booted from volume) | SCS-1L-1 |
2026-04-10 01:23:40.535691 | orchestrator | | d0dd826e-8a47-487a-9b70-913ce8a64b03 | test-2 | ACTIVE | test-2=192.168.112.179, 192.168.201.47 | N/A (booted from volume) | SCS-1L-1 |
2026-04-10 01:23:40.535698 | orchestrator | | 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 | test-1 | ACTIVE | test-1=192.168.112.101, 192.168.200.75 | N/A (booted from volume) | SCS-1L-1 |
2026-04-10 01:23:40.535704 | orchestrator | | 8dd2d72c-6249-4335-8abd-e14e6e1198dd | test | ACTIVE
| test-1=192.168.112.108, 192.168.200.43 | N/A (booted from volume) | SCS-1L-1 | 2026-04-10 01:23:40.535711 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-10 01:23:40.788127 | orchestrator | + openstack --os-cloud test server show test 2026-04-10 01:23:44.178716 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:44.178830 | orchestrator | | Field | Value | 2026-04-10 01:23:44.178838 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:44.178843 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-10 01:23:44.178848 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-10 01:23:44.178852 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-10 01:23:44.178856 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-10 01:23:44.178860 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-10 01:23:44.178877 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-10 01:23:44.178892 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-10 01:23:44.178896 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-10 
01:23:44.178900 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-10 01:23:44.178904 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-10 01:23:44.178908 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-10 01:23:44.178912 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-10 01:23:44.178916 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-10 01:23:44.178920 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-10 01:23:44.178932 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-10 01:23:44.178936 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-10T01:22:01.000000 | 2026-04-10 01:23:44.178944 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-10 01:23:44.178948 | orchestrator | | accessIPv4 | | 2026-04-10 01:23:44.178955 | orchestrator | | accessIPv6 | | 2026-04-10 01:23:44.178959 | orchestrator | | addresses | test-1=192.168.112.108, 192.168.200.43 | 2026-04-10 01:23:44.178963 | orchestrator | | config_drive | | 2026-04-10 01:23:44.178967 | orchestrator | | created | 2026-04-10T01:21:32Z | 2026-04-10 01:23:44.178971 | orchestrator | | description | None | 2026-04-10 01:23:44.178978 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-10 01:23:44.178982 | orchestrator | | hostId | e7c6a0af62f3d73fac8b339c49fbf05216c3e9dedda2d9139a56ba70 | 2026-04-10 01:23:44.178986 | orchestrator | | host_status | None | 2026-04-10 01:23:44.178994 | orchestrator | | id | 8dd2d72c-6249-4335-8abd-e14e6e1198dd | 2026-04-10 01:23:44.178998 | orchestrator | | image | N/A (booted from volume) | 2026-04-10 01:23:44.179004 | orchestrator | | 
key_name | test | 2026-04-10 01:23:44.179009 | orchestrator | | locked | False | 2026-04-10 01:23:44.179013 | orchestrator | | locked_reason | None | 2026-04-10 01:23:44.179017 | orchestrator | | name | test | 2026-04-10 01:23:44.179021 | orchestrator | | pinned_availability_zone | None | 2026-04-10 01:23:44.179028 | orchestrator | | progress | 0 | 2026-04-10 01:23:44.179032 | orchestrator | | project_id | 14ee9551d74f42eaafa8d4b494408036 | 2026-04-10 01:23:44.179036 | orchestrator | | properties | hostname='test' | 2026-04-10 01:23:44.179043 | orchestrator | | security_groups | name='ssh' | 2026-04-10 01:23:44.179050 | orchestrator | | | name='icmp' | 2026-04-10 01:23:44.179054 | orchestrator | | server_groups | None | 2026-04-10 01:23:44.179058 | orchestrator | | status | ACTIVE | 2026-04-10 01:23:44.179062 | orchestrator | | tags | test | 2026-04-10 01:23:44.179066 | orchestrator | | trusted_image_certificates | None | 2026-04-10 01:23:44.179078 | orchestrator | | updated | 2026-04-10T01:22:30Z | 2026-04-10 01:23:44.179082 | orchestrator | | user_id | eb650df5a9b54dd69afcb95d91ea1c8d | 2026-04-10 01:23:44.179086 | orchestrator | | volumes_attached | delete_on_termination='True', id='ce4497cd-6ad2-4af6-b629-71a803ccdd5e' | 2026-04-10 01:23:44.179090 | orchestrator | | | delete_on_termination='False', id='25ee9e7a-d4b6-4353-9037-3ba76bffc87a' | 2026-04-10 01:23:44.181483 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:44.359509 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-10 01:23:47.111690 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:47.111777 | orchestrator | | Field | Value | 2026-04-10 01:23:47.111785 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:47.111789 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-10 01:23:47.111807 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-10 01:23:47.111811 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-10 01:23:47.111815 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-10 01:23:47.111820 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-10 01:23:47.111824 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-10 01:23:47.111838 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-10 01:23:47.111846 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-10 01:23:47.111850 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-10 01:23:47.111854 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-10 01:23:47.111861 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-10 01:23:47.111865 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-10 01:23:47.111869 | orchestrator | | OS-EXT-STS:power_state | 
Running | 2026-04-10 01:23:47.111873 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-10 01:23:47.111877 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-10 01:23:47.111881 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-10T01:21:59.000000 | 2026-04-10 01:23:47.111889 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-10 01:23:47.111893 | orchestrator | | accessIPv4 | | 2026-04-10 01:23:47.111898 | orchestrator | | accessIPv6 | | 2026-04-10 01:23:47.111905 | orchestrator | | addresses | test-1=192.168.112.101, 192.168.200.75 | 2026-04-10 01:23:47.111913 | orchestrator | | config_drive | | 2026-04-10 01:23:47.111917 | orchestrator | | created | 2026-04-10T01:21:32Z | 2026-04-10 01:23:47.111921 | orchestrator | | description | None | 2026-04-10 01:23:47.111925 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-10 01:23:47.111929 | orchestrator | | hostId | e7c6a0af62f3d73fac8b339c49fbf05216c3e9dedda2d9139a56ba70 | 2026-04-10 01:23:47.111933 | orchestrator | | host_status | None | 2026-04-10 01:23:47.111942 | orchestrator | | id | 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 | 2026-04-10 01:23:47.111948 | orchestrator | | image | N/A (booted from volume) | 2026-04-10 01:23:47.111952 | orchestrator | | key_name | test | 2026-04-10 01:23:47.111959 | orchestrator | | locked | False | 2026-04-10 01:23:47.111963 | orchestrator | | locked_reason | None | 2026-04-10 01:23:47.111967 | orchestrator | | name | test-1 | 2026-04-10 01:23:47.111971 | orchestrator | | pinned_availability_zone | None | 2026-04-10 01:23:47.111975 | orchestrator | | progress | 0 | 2026-04-10 01:23:47.111979 | orchestrator | | 
project_id | 14ee9551d74f42eaafa8d4b494408036 | 2026-04-10 01:23:47.112015 | orchestrator | | properties | hostname='test-1' | 2026-04-10 01:23:47.112027 | orchestrator | | security_groups | name='ssh' | 2026-04-10 01:23:47.112036 | orchestrator | | | name='icmp' | 2026-04-10 01:23:47.112051 | orchestrator | | server_groups | None | 2026-04-10 01:23:47.112058 | orchestrator | | status | ACTIVE | 2026-04-10 01:23:47.112064 | orchestrator | | tags | test | 2026-04-10 01:23:47.112070 | orchestrator | | trusted_image_certificates | None | 2026-04-10 01:23:47.112076 | orchestrator | | updated | 2026-04-10T01:22:30Z | 2026-04-10 01:23:47.112082 | orchestrator | | user_id | eb650df5a9b54dd69afcb95d91ea1c8d | 2026-04-10 01:23:47.112089 | orchestrator | | volumes_attached | delete_on_termination='True', id='59338190-6490-4919-a1f7-2d1900eea409' | 2026-04-10 01:23:47.114008 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:47.262138 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-10 01:23:50.407765 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:50.407884 | orchestrator | | Field | Value | 2026-04-10 01:23:50.407892 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:50.407897 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-10 01:23:50.407901 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-10 01:23:50.407905 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-10 01:23:50.407909 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-10 01:23:50.407913 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-10 01:23:50.407917 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-10 01:23:50.407932 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-10 01:23:50.407941 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-10 01:23:50.407948 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-10 01:23:50.407952 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-10 01:23:50.407956 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-10 01:23:50.407960 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-10 01:23:50.407966 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-10 01:23:50.407972 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-10 01:23:50.407982 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-10 01:23:50.407990 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-10T01:22:02.000000 | 2026-04-10 01:23:50.408006 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-10 01:23:50.408016 | orchestrator | | accessIPv4 | | 2026-04-10 01:23:50.408023 | orchestrator | | accessIPv6 | | 2026-04-10 01:23:50.408029 | orchestrator | | 
addresses | test-2=192.168.112.179, 192.168.201.47 | 2026-04-10 01:23:50.408035 | orchestrator | | config_drive | | 2026-04-10 01:23:50.408042 | orchestrator | | created | 2026-04-10T01:21:33Z | 2026-04-10 01:23:50.408057 | orchestrator | | description | None | 2026-04-10 01:23:50.408070 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-10 01:23:50.408077 | orchestrator | | hostId | 3df0d6fde47215b706cb57d16177524f22b6d860ce8f095ec9f7b78a | 2026-04-10 01:23:50.408083 | orchestrator | | host_status | None | 2026-04-10 01:23:50.408101 | orchestrator | | id | d0dd826e-8a47-487a-9b70-913ce8a64b03 | 2026-04-10 01:23:50.408106 | orchestrator | | image | N/A (booted from volume) | 2026-04-10 01:23:50.408110 | orchestrator | | key_name | test | 2026-04-10 01:23:50.408114 | orchestrator | | locked | False | 2026-04-10 01:23:50.408118 | orchestrator | | locked_reason | None | 2026-04-10 01:23:50.408122 | orchestrator | | name | test-2 | 2026-04-10 01:23:50.408126 | orchestrator | | pinned_availability_zone | None | 2026-04-10 01:23:50.408130 | orchestrator | | progress | 0 | 2026-04-10 01:23:50.408139 | orchestrator | | project_id | 14ee9551d74f42eaafa8d4b494408036 | 2026-04-10 01:23:50.408147 | orchestrator | | properties | hostname='test-2' | 2026-04-10 01:23:50.408218 | orchestrator | | security_groups | name='ssh' | 2026-04-10 01:23:50.408226 | orchestrator | | | name='icmp' | 2026-04-10 01:23:50.408230 | orchestrator | | server_groups | None | 2026-04-10 01:23:50.408234 | orchestrator | | status | ACTIVE | 2026-04-10 01:23:50.408238 | orchestrator | | tags | test | 2026-04-10 01:23:50.408242 | orchestrator | | 
trusted_image_certificates | None | 2026-04-10 01:23:50.408246 | orchestrator | | updated | 2026-04-10T01:22:31Z | 2026-04-10 01:23:50.408250 | orchestrator | | user_id | eb650df5a9b54dd69afcb95d91ea1c8d | 2026-04-10 01:23:50.408257 | orchestrator | | volumes_attached | delete_on_termination='True', id='d189f08f-7d2c-4389-928e-1d2afd9f0594' | 2026-04-10 01:23:50.412388 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:50.685690 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-10 01:23:53.466379 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:53.466458 | orchestrator | | Field | Value | 2026-04-10 01:23:53.466469 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:53.466478 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-10 01:23:53.466486 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-04-10 01:23:53.466493 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-10 01:23:53.466501 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-10 01:23:53.466523 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-10 01:23:53.466531 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-10 01:23:53.466550 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-10 01:23:53.466559 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-10 01:23:53.466596 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-10 01:23:53.466606 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-10 01:23:53.466614 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-10 01:23:53.466621 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-10 01:23:53.466629 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-10 01:23:53.466637 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-10 01:23:53.466649 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-10 01:23:53.466657 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-10T01:22:01.000000 | 2026-04-10 01:23:53.466670 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-10 01:23:53.466678 | orchestrator | | accessIPv4 | | 2026-04-10 01:23:53.466689 | orchestrator | | accessIPv6 | | 2026-04-10 01:23:53.466697 | orchestrator | | addresses | test-2=192.168.112.164, 192.168.201.64 | 2026-04-10 01:23:53.466705 | orchestrator | | config_drive | | 2026-04-10 01:23:53.466713 | orchestrator | | created | 2026-04-10T01:21:34Z | 2026-04-10 01:23:53.466720 | orchestrator | | description | None | 2026-04-10 01:23:53.466732 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-10 01:23:53.466740 | orchestrator | | hostId | 3df0d6fde47215b706cb57d16177524f22b6d860ce8f095ec9f7b78a | 2026-04-10 01:23:53.466748 | orchestrator | | host_status | None | 2026-04-10 01:23:53.466760 | orchestrator | | id | 50f46a7b-0771-478a-a406-4febd5208272 | 2026-04-10 01:23:53.466768 | orchestrator | | image | N/A (booted from volume) | 2026-04-10 01:23:53.466779 | orchestrator | | key_name | test | 2026-04-10 01:23:53.466786 | orchestrator | | locked | False | 2026-04-10 01:23:53.466794 | orchestrator | | locked_reason | None | 2026-04-10 01:23:53.466802 | orchestrator | | name | test-3 | 2026-04-10 01:23:53.466814 | orchestrator | | pinned_availability_zone | None | 2026-04-10 01:23:53.466821 | orchestrator | | progress | 0 | 2026-04-10 01:23:53.466829 | orchestrator | | project_id | 14ee9551d74f42eaafa8d4b494408036 | 2026-04-10 01:23:53.466836 | orchestrator | | properties | hostname='test-3' | 2026-04-10 01:23:53.466848 | orchestrator | | security_groups | name='ssh' | 2026-04-10 01:23:53.466856 | orchestrator | | | name='icmp' | 2026-04-10 01:23:53.466867 | orchestrator | | server_groups | None | 2026-04-10 01:23:53.466874 | orchestrator | | status | ACTIVE | 2026-04-10 01:23:53.466882 | orchestrator | | tags | test | 2026-04-10 01:23:53.466893 | orchestrator | | trusted_image_certificates | None | 2026-04-10 01:23:53.466901 | orchestrator | | updated | 2026-04-10T01:22:32Z | 2026-04-10 01:23:53.466909 | orchestrator | | user_id | eb650df5a9b54dd69afcb95d91ea1c8d | 2026-04-10 01:23:53.466916 | orchestrator | | volumes_attached | delete_on_termination='True', id='34601242-775e-42db-99dd-4c72372b04de' | 2026-04-10 01:23:53.470385 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:53.722917 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-10 01:23:56.650694 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:56.650773 | orchestrator | | Field | Value | 2026-04-10 01:23:56.650780 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:56.650786 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-10 01:23:56.650806 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-10 01:23:56.650811 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-10 01:23:56.650815 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-10 01:23:56.650819 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-10 01:23:56.650823 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-10 
01:23:56.650838 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-10 01:23:56.651286 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-10 01:23:56.651304 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-10 01:23:56.651309 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-10 01:23:56.651313 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-10 01:23:56.651324 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-10 01:23:56.651328 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-10 01:23:56.651333 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-10 01:23:56.651337 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-10 01:23:56.651344 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-10T01:21:59.000000 | 2026-04-10 01:23:56.651355 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-10 01:23:56.651359 | orchestrator | | accessIPv4 | | 2026-04-10 01:23:56.651363 | orchestrator | | accessIPv6 | | 2026-04-10 01:23:56.651367 | orchestrator | | addresses | test-3=192.168.112.112, 192.168.202.207 | 2026-04-10 01:23:56.651375 | orchestrator | | config_drive | | 2026-04-10 01:23:56.651379 | orchestrator | | created | 2026-04-10T01:21:35Z | 2026-04-10 01:23:56.651383 | orchestrator | | description | None | 2026-04-10 01:23:56.651387 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-10 01:23:56.651391 | orchestrator | | hostId | 8fc370693980a5f5cda3789ea2558bd6082114a4186fc75bd16861b5 | 2026-04-10 01:23:56.651398 | orchestrator | | host_status | None | 2026-04-10 01:23:56.651406 | orchestrator | | id | 
2e86ece9-5403-4d48-b0d3-813e4a827038 | 2026-04-10 01:23:56.651411 | orchestrator | | image | N/A (booted from volume) | 2026-04-10 01:23:56.651415 | orchestrator | | key_name | test | 2026-04-10 01:23:56.651434 | orchestrator | | locked | False | 2026-04-10 01:23:56.651441 | orchestrator | | locked_reason | None | 2026-04-10 01:23:56.651447 | orchestrator | | name | test-4 | 2026-04-10 01:23:56.651453 | orchestrator | | pinned_availability_zone | None | 2026-04-10 01:23:56.651459 | orchestrator | | progress | 0 | 2026-04-10 01:23:56.651465 | orchestrator | | project_id | 14ee9551d74f42eaafa8d4b494408036 | 2026-04-10 01:23:56.651475 | orchestrator | | properties | hostname='test-4' | 2026-04-10 01:23:56.651487 | orchestrator | | security_groups | name='ssh' | 2026-04-10 01:23:56.651494 | orchestrator | | | name='icmp' | 2026-04-10 01:23:56.651507 | orchestrator | | server_groups | None | 2026-04-10 01:23:56.651514 | orchestrator | | status | ACTIVE | 2026-04-10 01:23:56.651521 | orchestrator | | tags | test | 2026-04-10 01:23:56.651527 | orchestrator | | trusted_image_certificates | None | 2026-04-10 01:23:56.651533 | orchestrator | | updated | 2026-04-10T01:22:32Z | 2026-04-10 01:23:56.651540 | orchestrator | | user_id | eb650df5a9b54dd69afcb95d91ea1c8d | 2026-04-10 01:23:56.651546 | orchestrator | | volumes_attached | delete_on_termination='True', id='1dc1ef57-ecb0-4c86-ad43-6b67c109ed40' | 2026-04-10 01:23:56.654940 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-10 01:23:56.888074 | orchestrator | + server_ping 2026-04-10 01:23:56.889129 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-10 01:23:56.889855 | orchestrator | ++ tr -d '\r'
2026-04-10 01:23:59.649703 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:23:59.649784 | orchestrator | + ping -c3 192.168.112.101
2026-04-10 01:23:59.666326 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2026-04-10 01:23:59.666438 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=10.3 ms
2026-04-10 01:24:00.660024 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.39 ms
2026-04-10 01:24:01.661632 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.56 ms
2026-04-10 01:24:01.661725 | orchestrator |
2026-04-10 01:24:01.661748 | orchestrator | --- 192.168.112.101 ping statistics ---
2026-04-10 01:24:01.661764 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:24:01.661778 | orchestrator | rtt min/avg/max/mdev = 1.559/4.751/10.307/3.942 ms
2026-04-10 01:24:01.662274 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:24:01.662307 | orchestrator | + ping -c3 192.168.112.164
2026-04-10 01:24:01.674671 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data.
2026-04-10 01:24:01.674733 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=8.15 ms
2026-04-10 01:24:02.670086 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.10 ms
2026-04-10 01:24:03.670925 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.56 ms
2026-04-10 01:24:03.671315 | orchestrator |
2026-04-10 01:24:03.671341 | orchestrator | --- 192.168.112.164 ping statistics ---
2026-04-10 01:24:03.671350 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:24:03.671356 | orchestrator | rtt min/avg/max/mdev = 1.561/3.937/8.147/2.984 ms
2026-04-10 01:24:03.672097 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:24:03.672106 | orchestrator | + ping -c3 192.168.112.108
2026-04-10 01:24:03.683017 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-04-10 01:24:03.683094 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.76 ms
2026-04-10 01:24:04.679775 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.25 ms
2026-04-10 01:24:05.680251 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.78 ms
2026-04-10 01:24:05.680582 | orchestrator |
2026-04-10 01:24:05.680609 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-04-10 01:24:05.680619 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-10 01:24:05.680626 | orchestrator | rtt min/avg/max/mdev = 1.775/3.595/6.758/2.245 ms
2026-04-10 01:24:05.680816 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:24:05.680887 | orchestrator | + ping -c3 192.168.112.179
2026-04-10 01:24:05.693523 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-04-10 01:24:05.693625 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=8.78 ms
2026-04-10 01:24:06.688908 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.19 ms
2026-04-10 01:24:07.690735 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.90 ms
2026-04-10 01:24:07.690837 | orchestrator |
2026-04-10 01:24:07.690847 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-04-10 01:24:07.690945 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:24:07.690957 | orchestrator | rtt min/avg/max/mdev = 1.895/4.287/8.780/3.179 ms
2026-04-10 01:24:07.690975 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:24:07.690983 | orchestrator | + ping -c3 192.168.112.112
2026-04-10 01:24:07.700834 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-04-10 01:24:07.700929 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.70 ms
2026-04-10 01:24:08.699506 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.16 ms
2026-04-10 01:24:09.700726 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.72 ms
2026-04-10 01:24:09.701266 | orchestrator |
2026-04-10 01:24:09.701329 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-04-10 01:24:09.701342 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-10 01:24:09.701350 | orchestrator | rtt min/avg/max/mdev = 1.721/3.194/5.700/1.780 ms
2026-04-10 01:24:09.701481 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-04-10 01:24:09.701492 | orchestrator | + compute_list
2026-04-10 01:24:09.701499 | orchestrator | + osism manage compute list testbed-node-3
2026-04-10 01:24:11.316028 | orchestrator | 2026-04-10 01:24:11 | ERROR  | Unable to get ansible vault password
2026-04-10 01:24:11.316100
| orchestrator | 2026-04-10 01:24:11 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:24:11.316108 | orchestrator | 2026-04-10 01:24:11 | ERROR  | Dropping encrypted entries 2026-04-10 01:24:14.683492 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:24:14.683587 | orchestrator | | ID | Name | Status | 2026-04-10 01:24:14.683597 | orchestrator | |--------------------------------------+--------+----------| 2026-04-10 01:24:14.683604 | orchestrator | | 50f46a7b-0771-478a-a406-4febd5208272 | test-3 | ACTIVE | 2026-04-10 01:24:14.683611 | orchestrator | | d0dd826e-8a47-487a-9b70-913ce8a64b03 | test-2 | ACTIVE | 2026-04-10 01:24:14.683617 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:24:15.045878 | orchestrator | + osism manage compute list testbed-node-4 2026-04-10 01:24:16.632536 | orchestrator | 2026-04-10 01:24:16 | ERROR  | Unable to get ansible vault password 2026-04-10 01:24:16.632624 | orchestrator | 2026-04-10 01:24:16 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:24:16.632632 | orchestrator | 2026-04-10 01:24:16 | ERROR  | Dropping encrypted entries 2026-04-10 01:24:18.561019 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:24:18.561117 | orchestrator | | ID | Name | Status | 2026-04-10 01:24:18.561125 | orchestrator | |--------------------------------------+--------+----------| 2026-04-10 01:24:18.561183 | orchestrator | | 2e86ece9-5403-4d48-b0d3-813e4a827038 | test-4 | ACTIVE | 2026-04-10 01:24:18.561204 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:24:18.935625 | orchestrator | + osism manage compute list testbed-node-5 2026-04-10 01:24:20.520882 | orchestrator | 2026-04-10 01:24:20 | ERROR  | Unable to get ansible vault password 
2026-04-10 01:24:20.520942 | orchestrator | 2026-04-10 01:24:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:24:20.520953 | orchestrator | 2026-04-10 01:24:20 | ERROR  | Dropping encrypted entries 2026-04-10 01:24:22.403636 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:24:22.403710 | orchestrator | | ID | Name | Status | 2026-04-10 01:24:22.403716 | orchestrator | |--------------------------------------+--------+----------| 2026-04-10 01:24:22.403721 | orchestrator | | 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 | test-1 | ACTIVE | 2026-04-10 01:24:22.403726 | orchestrator | | 8dd2d72c-6249-4335-8abd-e14e6e1198dd | test | ACTIVE | 2026-04-10 01:24:22.403730 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:24:22.777694 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-10 01:24:24.324841 | orchestrator | 2026-04-10 01:24:24 | ERROR  | Unable to get ansible vault password 2026-04-10 01:24:24.324933 | orchestrator | 2026-04-10 01:24:24 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:24:24.324945 | orchestrator | 2026-04-10 01:24:24 | ERROR  | Dropping encrypted entries 2026-04-10 01:24:25.716865 | orchestrator | 2026-04-10 01:24:25 | INFO  | Live migrating server 2e86ece9-5403-4d48-b0d3-813e4a827038 2026-04-10 01:24:39.602276 | orchestrator | 2026-04-10 01:24:39 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:24:42.244207 | orchestrator | 2026-04-10 01:24:42 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:24:44.664446 | orchestrator | 2026-04-10 01:24:44 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 
01:24:46.951023 | orchestrator | 2026-04-10 01:24:46 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:24:49.223007 | orchestrator | 2026-04-10 01:24:49 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:24:51.544099 | orchestrator | 2026-04-10 01:24:51 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:24:53.749099 | orchestrator | 2026-04-10 01:24:53 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:24:56.154550 | orchestrator | 2026-04-10 01:24:56 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:24:58.465677 | orchestrator | 2026-04-10 01:24:58 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) completed with status ACTIVE 2026-04-10 01:24:58.749098 | orchestrator | + compute_list 2026-04-10 01:24:58.749182 | orchestrator | + osism manage compute list testbed-node-3 2026-04-10 01:25:00.383534 | orchestrator | 2026-04-10 01:25:00 | ERROR  | Unable to get ansible vault password 2026-04-10 01:25:00.383623 | orchestrator | 2026-04-10 01:25:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:25:00.383634 | orchestrator | 2026-04-10 01:25:00 | ERROR  | Dropping encrypted entries 2026-04-10 01:25:01.965142 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:25:01.965238 | orchestrator | | ID | Name | Status | 2026-04-10 01:25:01.965250 | orchestrator | |--------------------------------------+--------+----------| 2026-04-10 01:25:01.965256 | orchestrator | | 2e86ece9-5403-4d48-b0d3-813e4a827038 | test-4 | ACTIVE | 2026-04-10 01:25:01.965262 | orchestrator | | 50f46a7b-0771-478a-a406-4febd5208272 | test-3 | ACTIVE | 2026-04-10 01:25:01.965269 | 
orchestrator | | d0dd826e-8a47-487a-9b70-913ce8a64b03 | test-2 | ACTIVE | 2026-04-10 01:25:01.965275 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:25:02.273024 | orchestrator | + osism manage compute list testbed-node-4 2026-04-10 01:25:03.859214 | orchestrator | 2026-04-10 01:25:03 | ERROR  | Unable to get ansible vault password 2026-04-10 01:25:03.859302 | orchestrator | 2026-04-10 01:25:03 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:25:03.859313 | orchestrator | 2026-04-10 01:25:03 | ERROR  | Dropping encrypted entries 2026-04-10 01:25:04.897348 | orchestrator | +------+--------+----------+ 2026-04-10 01:25:04.897446 | orchestrator | | ID | Name | Status | 2026-04-10 01:25:04.897457 | orchestrator | |------+--------+----------| 2026-04-10 01:25:04.897463 | orchestrator | +------+--------+----------+ 2026-04-10 01:25:05.207518 | orchestrator | + osism manage compute list testbed-node-5 2026-04-10 01:25:06.827909 | orchestrator | 2026-04-10 01:25:06 | ERROR  | Unable to get ansible vault password 2026-04-10 01:25:06.827966 | orchestrator | 2026-04-10 01:25:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:25:06.827976 | orchestrator | 2026-04-10 01:25:06 | ERROR  | Dropping encrypted entries 2026-04-10 01:25:08.261023 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:25:08.261078 | orchestrator | | ID | Name | Status | 2026-04-10 01:25:08.261087 | orchestrator | |--------------------------------------+--------+----------| 2026-04-10 01:25:08.261131 | orchestrator | | 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 | test-1 | ACTIVE | 2026-04-10 01:25:08.261140 | orchestrator | | 8dd2d72c-6249-4335-8abd-e14e6e1198dd | test | ACTIVE | 2026-04-10 01:25:08.261164 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-10 01:25:08.549827 | orchestrator | + server_ping 2026-04-10 01:25:08.550451 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-10 01:25:08.550542 | orchestrator | ++ tr -d '\r' 2026-04-10 01:25:11.187642 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:25:11.187882 | orchestrator | + ping -c3 192.168.112.101 2026-04-10 01:25:11.195344 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data. 2026-04-10 01:25:11.195391 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=5.45 ms 2026-04-10 01:25:12.193428 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=1.71 ms 2026-04-10 01:25:13.195545 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.34 ms 2026-04-10 01:25:13.195603 | orchestrator | 2026-04-10 01:25:13.195611 | orchestrator | --- 192.168.112.101 ping statistics --- 2026-04-10 01:25:13.195618 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:25:13.195624 | orchestrator | rtt min/avg/max/mdev = 1.342/2.835/5.449/1.854 ms 2026-04-10 01:25:13.195630 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:25:13.195636 | orchestrator | + ping -c3 192.168.112.164 2026-04-10 01:25:13.203744 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data. 
2026-04-10 01:25:13.203801 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=5.56 ms 2026-04-10 01:25:14.201703 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=1.59 ms 2026-04-10 01:25:15.202062 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.41 ms 2026-04-10 01:25:15.202137 | orchestrator | 2026-04-10 01:25:15.202149 | orchestrator | --- 192.168.112.164 ping statistics --- 2026-04-10 01:25:15.202158 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:25:15.202166 | orchestrator | rtt min/avg/max/mdev = 1.407/2.853/5.561/1.916 ms 2026-04-10 01:25:15.202519 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:25:15.202553 | orchestrator | + ping -c3 192.168.112.108 2026-04-10 01:25:15.211067 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2026-04-10 01:25:15.211137 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.34 ms 2026-04-10 01:25:16.208726 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.42 ms 2026-04-10 01:25:17.210269 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.85 ms 2026-04-10 01:25:17.210368 | orchestrator | 2026-04-10 01:25:17.210381 | orchestrator | --- 192.168.112.108 ping statistics --- 2026-04-10 01:25:17.210391 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:25:17.210398 | orchestrator | rtt min/avg/max/mdev = 1.853/3.536/6.340/1.995 ms 2026-04-10 01:25:17.210744 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:25:17.210765 | orchestrator | + ping -c3 192.168.112.179 2026-04-10 01:25:17.221382 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2026-04-10 01:25:17.221473 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.35 ms 2026-04-10 01:25:18.219200 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.59 ms 2026-04-10 01:25:19.220412 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.49 ms 2026-04-10 01:25:19.221279 | orchestrator | 2026-04-10 01:25:19.221327 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-10 01:25:19.221334 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:25:19.221340 | orchestrator | rtt min/avg/max/mdev = 1.488/3.473/6.347/2.080 ms 2026-04-10 01:25:19.221359 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:25:19.221364 | orchestrator | + ping -c3 192.168.112.112 2026-04-10 01:25:19.230988 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 2026-04-10 01:25:19.231061 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.96 ms 2026-04-10 01:25:20.228802 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.29 ms 2026-04-10 01:25:21.229992 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.60 ms 2026-04-10 01:25:21.230140 | orchestrator | 2026-04-10 01:25:21.230155 | orchestrator | --- 192.168.112.112 ping statistics --- 2026-04-10 01:25:21.230164 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:25:21.230190 | orchestrator | rtt min/avg/max/mdev = 1.595/3.281/5.961/1.915 ms 2026-04-10 01:25:21.230197 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-10 01:25:22.796473 | orchestrator | 2026-04-10 01:25:22 | ERROR  | Unable to get ansible vault password 2026-04-10 01:25:22.796550 | orchestrator | 2026-04-10 01:25:22 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-10 01:25:22.796559 | orchestrator | 2026-04-10 01:25:22 | ERROR  | Dropping encrypted entries 2026-04-10 01:25:24.268788 | orchestrator | 2026-04-10 01:25:24 | INFO  | Live migrating server 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 2026-04-10 01:25:35.122721 | orchestrator | 2026-04-10 01:25:35 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:37.492545 | orchestrator | 2026-04-10 01:25:37 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:39.760100 | orchestrator | 2026-04-10 01:25:39 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:42.100292 | orchestrator | 2026-04-10 01:25:42 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:44.435625 | orchestrator | 2026-04-10 01:25:44 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:46.642526 | orchestrator | 2026-04-10 01:25:46 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:48.873881 | orchestrator | 2026-04-10 01:25:48 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:51.096889 | orchestrator | 2026-04-10 01:25:51 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress 2026-04-10 01:25:53.407487 | orchestrator | 2026-04-10 01:25:53 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) completed with status ACTIVE 2026-04-10 01:25:53.407617 | orchestrator | 2026-04-10 01:25:53 | INFO  | Live migrating server 8dd2d72c-6249-4335-8abd-e14e6e1198dd 2026-04-10 01:26:04.238432 | orchestrator | 2026-04-10 01:26:04 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is 
still in progress 2026-04-10 01:26:06.586837 | orchestrator | 2026-04-10 01:26:06 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:08.873350 | orchestrator | 2026-04-10 01:26:08 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:11.223754 | orchestrator | 2026-04-10 01:26:11 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:13.548788 | orchestrator | 2026-04-10 01:26:13 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:15.816622 | orchestrator | 2026-04-10 01:26:15 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:18.121713 | orchestrator | 2026-04-10 01:26:18 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:20.311123 | orchestrator | 2026-04-10 01:26:20 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:22.552586 | orchestrator | 2026-04-10 01:26:22 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:24.778601 | orchestrator | 2026-04-10 01:26:24 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress 2026-04-10 01:26:27.184162 | orchestrator | 2026-04-10 01:26:27 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) completed with status ACTIVE 2026-04-10 01:26:27.466377 | orchestrator | + compute_list 2026-04-10 01:26:27.466470 | orchestrator | + osism manage compute list testbed-node-3 2026-04-10 01:26:29.043192 | orchestrator | 2026-04-10 01:26:29 | ERROR  | Unable to get ansible vault password 2026-04-10 01:26:29.043284 | orchestrator | 2026-04-10 01:26:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: 
'/share/ansible_vault_password.key' 2026-04-10 01:26:29.043296 | orchestrator | 2026-04-10 01:26:29 | ERROR  | Dropping encrypted entries 2026-04-10 01:26:30.954821 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:26:30.954882 | orchestrator | | ID | Name | Status | 2026-04-10 01:26:30.954890 | orchestrator | |--------------------------------------+--------+----------| 2026-04-10 01:26:30.954896 | orchestrator | | 2e86ece9-5403-4d48-b0d3-813e4a827038 | test-4 | ACTIVE | 2026-04-10 01:26:30.954902 | orchestrator | | 50f46a7b-0771-478a-a406-4febd5208272 | test-3 | ACTIVE | 2026-04-10 01:26:30.954908 | orchestrator | | d0dd826e-8a47-487a-9b70-913ce8a64b03 | test-2 | ACTIVE | 2026-04-10 01:26:30.954914 | orchestrator | | 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 | test-1 | ACTIVE | 2026-04-10 01:26:30.954920 | orchestrator | | 8dd2d72c-6249-4335-8abd-e14e6e1198dd | test | ACTIVE | 2026-04-10 01:26:30.954926 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-10 01:26:31.254320 | orchestrator | + osism manage compute list testbed-node-4 2026-04-10 01:26:32.846806 | orchestrator | 2026-04-10 01:26:32 | ERROR  | Unable to get ansible vault password 2026-04-10 01:26:32.846880 | orchestrator | 2026-04-10 01:26:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:26:32.846888 | orchestrator | 2026-04-10 01:26:32 | ERROR  | Dropping encrypted entries 2026-04-10 01:26:34.028652 | orchestrator | +------+--------+----------+ 2026-04-10 01:26:34.028735 | orchestrator | | ID | Name | Status | 2026-04-10 01:26:34.028741 | orchestrator | |------+--------+----------| 2026-04-10 01:26:34.028841 | orchestrator | +------+--------+----------+ 2026-04-10 01:26:34.325002 | orchestrator | + osism manage compute list testbed-node-5 2026-04-10 01:26:35.912951 | orchestrator | 2026-04-10 01:26:35 | ERROR  | Unable to get ansible vault password 
2026-04-10 01:26:35.913022 | orchestrator | 2026-04-10 01:26:35 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:26:35.913066 | orchestrator | 2026-04-10 01:26:35 | ERROR  | Dropping encrypted entries 2026-04-10 01:26:36.973211 | orchestrator | +------+--------+----------+ 2026-04-10 01:26:36.973313 | orchestrator | | ID | Name | Status | 2026-04-10 01:26:36.973323 | orchestrator | |------+--------+----------| 2026-04-10 01:26:36.973331 | orchestrator | +------+--------+----------+ 2026-04-10 01:26:37.249330 | orchestrator | + server_ping 2026-04-10 01:26:37.250341 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-10 01:26:37.250661 | orchestrator | ++ tr -d '\r' 2026-04-10 01:26:39.959439 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:26:39.959516 | orchestrator | + ping -c3 192.168.112.101 2026-04-10 01:26:39.969765 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data. 
2026-04-10 01:26:39.969860 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=8.65 ms 2026-04-10 01:26:40.965416 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.15 ms 2026-04-10 01:26:41.966345 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.78 ms 2026-04-10 01:26:41.966463 | orchestrator | 2026-04-10 01:26:41.966476 | orchestrator | --- 192.168.112.101 ping statistics --- 2026-04-10 01:26:41.966482 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:26:41.966487 | orchestrator | rtt min/avg/max/mdev = 1.783/4.193/8.649/3.153 ms 2026-04-10 01:26:41.967050 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:26:41.967105 | orchestrator | + ping -c3 192.168.112.164 2026-04-10 01:26:41.979241 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data. 2026-04-10 01:26:41.979330 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=8.02 ms 2026-04-10 01:26:42.974645 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.09 ms 2026-04-10 01:26:43.975065 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.40 ms 2026-04-10 01:26:43.975146 | orchestrator | 2026-04-10 01:26:43.975155 | orchestrator | --- 192.168.112.164 ping statistics --- 2026-04-10 01:26:43.975163 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-10 01:26:43.975194 | orchestrator | rtt min/avg/max/mdev = 1.395/3.834/8.022/2.974 ms 2026-04-10 01:26:43.975663 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:26:43.975704 | orchestrator | + ping -c3 192.168.112.108 2026-04-10 01:26:43.983226 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 
2026-04-10 01:26:43.983283 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=4.66 ms 2026-04-10 01:26:44.983424 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=1.95 ms 2026-04-10 01:26:45.983880 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.80 ms 2026-04-10 01:26:45.984772 | orchestrator | 2026-04-10 01:26:45.984826 | orchestrator | --- 192.168.112.108 ping statistics --- 2026-04-10 01:26:45.984838 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:26:45.984909 | orchestrator | rtt min/avg/max/mdev = 1.803/2.805/4.661/1.313 ms 2026-04-10 01:26:45.985110 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:26:45.985123 | orchestrator | + ping -c3 192.168.112.179 2026-04-10 01:26:45.995256 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2026-04-10 01:26:45.995334 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=6.16 ms 2026-04-10 01:26:46.992590 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.19 ms 2026-04-10 01:26:47.994370 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.86 ms 2026-04-10 01:26:47.994461 | orchestrator | 2026-04-10 01:26:47.994474 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-10 01:26:47.994482 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-10 01:26:47.994489 | orchestrator | rtt min/avg/max/mdev = 1.864/3.404/6.162/1.954 ms 2026-04-10 01:26:47.995096 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-10 01:26:47.995159 | orchestrator | + ping -c3 192.168.112.112 2026-04-10 01:26:48.003822 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 
2026-04-10 01:26:48.003912 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=4.85 ms 2026-04-10 01:26:49.001991 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=1.81 ms 2026-04-10 01:26:50.003110 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.62 ms 2026-04-10 01:26:50.003692 | orchestrator | 2026-04-10 01:26:50.003727 | orchestrator | --- 192.168.112.112 ping statistics --- 2026-04-10 01:26:50.003735 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-10 01:26:50.003741 | orchestrator | rtt min/avg/max/mdev = 1.622/2.758/4.849/1.479 ms 2026-04-10 01:26:50.003751 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-04-10 01:26:51.622168 | orchestrator | 2026-04-10 01:26:51 | ERROR  | Unable to get ansible vault password 2026-04-10 01:26:51.622291 | orchestrator | 2026-04-10 01:26:51 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-10 01:26:51.622305 | orchestrator | 2026-04-10 01:26:51 | ERROR  | Dropping encrypted entries 2026-04-10 01:26:53.207253 | orchestrator | 2026-04-10 01:26:53 | INFO  | Live migrating server 2e86ece9-5403-4d48-b0d3-813e4a827038 2026-04-10 01:27:05.831615 | orchestrator | 2026-04-10 01:27:05 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:08.254358 | orchestrator | 2026-04-10 01:27:08 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:10.650947 | orchestrator | 2026-04-10 01:27:10 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:13.114971 | orchestrator | 2026-04-10 01:27:13 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:15.453869 | orchestrator | 2026-04-10 01:27:15 | INFO  | 
Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:17.800744 | orchestrator | 2026-04-10 01:27:17 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:20.240436 | orchestrator | 2026-04-10 01:27:20 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:22.584649 | orchestrator | 2026-04-10 01:27:22 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress 2026-04-10 01:27:24.805598 | orchestrator | 2026-04-10 01:27:24 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) completed with status ACTIVE 2026-04-10 01:27:24.805652 | orchestrator | 2026-04-10 01:27:24 | INFO  | Live migrating server 50f46a7b-0771-478a-a406-4febd5208272 2026-04-10 01:27:36.683917 | orchestrator | 2026-04-10 01:27:36 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 01:27:39.027457 | orchestrator | 2026-04-10 01:27:39 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 01:27:41.390928 | orchestrator | 2026-04-10 01:27:41 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 01:27:43.707136 | orchestrator | 2026-04-10 01:27:43 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 01:27:45.915632 | orchestrator | 2026-04-10 01:27:45 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 01:27:48.172313 | orchestrator | 2026-04-10 01:27:48 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 01:27:50.479517 | orchestrator | 2026-04-10 01:27:50 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 
01:27:52.771756 | orchestrator | 2026-04-10 01:27:52 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress 2026-04-10 01:27:55.116164 | orchestrator | 2026-04-10 01:27:55 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) completed with status ACTIVE 2026-04-10 01:27:55.116239 | orchestrator | 2026-04-10 01:27:55 | INFO  | Live migrating server d0dd826e-8a47-487a-9b70-913ce8a64b03 2026-04-10 01:28:05.177229 | orchestrator | 2026-04-10 01:28:05 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:07.440885 | orchestrator | 2026-04-10 01:28:07 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:09.798337 | orchestrator | 2026-04-10 01:28:09 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:12.081199 | orchestrator | 2026-04-10 01:28:12 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:14.399423 | orchestrator | 2026-04-10 01:28:14 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:16.691254 | orchestrator | 2026-04-10 01:28:16 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:19.004860 | orchestrator | 2026-04-10 01:28:19 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:21.278901 | orchestrator | 2026-04-10 01:28:21 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress 2026-04-10 01:28:23.740379 | orchestrator | 2026-04-10 01:28:23 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) completed with status ACTIVE 2026-04-10 01:28:23.740468 | orchestrator | 2026-04-10 01:28:23 | INFO  | Live migrating server 
8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998
2026-04-10 01:28:34.441687 | orchestrator | 2026-04-10 01:28:34 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:36.808832 | orchestrator | 2026-04-10 01:28:36 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:39.138477 | orchestrator | 2026-04-10 01:28:39 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:41.493162 | orchestrator | 2026-04-10 01:28:41 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:43.686117 | orchestrator | 2026-04-10 01:28:43 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:45.963048 | orchestrator | 2026-04-10 01:28:45 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:48.270851 | orchestrator | 2026-04-10 01:28:48 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:50.498114 | orchestrator | 2026-04-10 01:28:50 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:52.824180 | orchestrator | 2026-04-10 01:28:52 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:28:55.160983 | orchestrator | 2026-04-10 01:28:55 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) completed with status ACTIVE
2026-04-10 01:28:55.161066 | orchestrator | 2026-04-10 01:28:55 | INFO  | Live migrating server 8dd2d72c-6249-4335-8abd-e14e6e1198dd
2026-04-10 01:29:06.222491 | orchestrator | 2026-04-10 01:29:06 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:08.624433 | orchestrator | 2026-04-10 01:29:08 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:10.913477 | orchestrator | 2026-04-10 01:29:10 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:13.253577 | orchestrator | 2026-04-10 01:29:13 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:15.585538 | orchestrator | 2026-04-10 01:29:15 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:17.960157 | orchestrator | 2026-04-10 01:29:17 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:20.198127 | orchestrator | 2026-04-10 01:29:20 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:22.557842 | orchestrator | 2026-04-10 01:29:22 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:24.903890 | orchestrator | 2026-04-10 01:29:24 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:27.205226 | orchestrator | 2026-04-10 01:29:27 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:29:29.502282 | orchestrator | 2026-04-10 01:29:29 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) completed with status ACTIVE
2026-04-10 01:29:29.772778 | orchestrator | + compute_list
2026-04-10 01:29:29.772869 | orchestrator | + osism manage compute list testbed-node-3
2026-04-10 01:29:31.409578 | orchestrator | 2026-04-10 01:29:31 | ERROR  | Unable to get ansible vault password
2026-04-10 01:29:31.409745 | orchestrator | 2026-04-10 01:29:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-10 01:29:31.409767 | orchestrator | 2026-04-10 01:29:31 | ERROR  | Dropping encrypted entries
2026-04-10 01:29:32.551663 | orchestrator | +------+--------+----------+
2026-04-10 01:29:32.551786 | orchestrator | | ID   | Name   | Status   |
2026-04-10 01:29:32.551799 | orchestrator | |------+--------+----------|
2026-04-10 01:29:32.551806 | orchestrator | +------+--------+----------+
2026-04-10 01:29:32.846746 | orchestrator | + osism manage compute list testbed-node-4
2026-04-10 01:29:34.370184 | orchestrator | 2026-04-10 01:29:34 | ERROR  | Unable to get ansible vault password
2026-04-10 01:29:34.370260 | orchestrator | 2026-04-10 01:29:34 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-10 01:29:34.370269 | orchestrator | 2026-04-10 01:29:34 | ERROR  | Dropping encrypted entries
2026-04-10 01:29:35.908416 | orchestrator | +--------------------------------------+--------+----------+
2026-04-10 01:29:35.908493 | orchestrator | | ID                                   | Name   | Status   |
2026-04-10 01:29:35.908499 | orchestrator | |--------------------------------------+--------+----------|
2026-04-10 01:29:35.908504 | orchestrator | | 2e86ece9-5403-4d48-b0d3-813e4a827038 | test-4 | ACTIVE   |
2026-04-10 01:29:35.908509 | orchestrator | | 50f46a7b-0771-478a-a406-4febd5208272 | test-3 | ACTIVE   |
2026-04-10 01:29:35.908514 | orchestrator | | d0dd826e-8a47-487a-9b70-913ce8a64b03 | test-2 | ACTIVE   |
2026-04-10 01:29:35.908518 | orchestrator | | 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 | test-1 | ACTIVE   |
2026-04-10 01:29:35.908522 | orchestrator | | 8dd2d72c-6249-4335-8abd-e14e6e1198dd | test   | ACTIVE   |
2026-04-10 01:29:35.908526 | orchestrator | +--------------------------------------+--------+----------+
2026-04-10 01:29:36.193494 | orchestrator | + osism manage compute list testbed-node-5
2026-04-10 01:29:37.707687 | orchestrator | 2026-04-10 01:29:37 | ERROR  | Unable to get ansible vault password
2026-04-10 01:29:37.707772 | orchestrator | 2026-04-10 01:29:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-10 01:29:37.707780 | orchestrator | 2026-04-10 01:29:37 | ERROR  | Dropping encrypted entries
2026-04-10 01:29:38.932159 | orchestrator | +------+--------+----------+
2026-04-10 01:29:38.932235 | orchestrator | | ID   | Name   | Status   |
2026-04-10 01:29:38.932244 | orchestrator | |------+--------+----------|
2026-04-10 01:29:38.932251 | orchestrator | +------+--------+----------+
2026-04-10 01:29:39.220809 | orchestrator | + server_ping
2026-04-10 01:29:39.221385 | orchestrator | ++ tr -d '\r'
2026-04-10 01:29:39.221479 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-10 01:29:42.113424 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:29:42.113496 | orchestrator | + ping -c3 192.168.112.101
2026-04-10 01:29:42.123846 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2026-04-10 01:29:42.123955 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=8.38 ms
2026-04-10 01:29:43.119273 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=2.03 ms
2026-04-10 01:29:44.120416 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.68 ms
2026-04-10 01:29:44.120492 | orchestrator |
2026-04-10 01:29:44.120499 | orchestrator | --- 192.168.112.101 ping statistics ---
2026-04-10 01:29:44.120505 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:29:44.120510 | orchestrator | rtt min/avg/max/mdev = 1.681/4.031/8.384/3.081 ms
2026-04-10 01:29:44.121344 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:29:44.121371 | orchestrator | + ping -c3 192.168.112.164
2026-04-10 01:29:44.131270 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data.
2026-04-10 01:29:44.131355 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=5.98 ms
2026-04-10 01:29:45.128558 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.23 ms
2026-04-10 01:29:46.129554 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.83 ms
2026-04-10 01:29:46.130211 | orchestrator |
2026-04-10 01:29:46.130260 | orchestrator | --- 192.168.112.164 ping statistics ---
2026-04-10 01:29:46.130269 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-10 01:29:46.130276 | orchestrator | rtt min/avg/max/mdev = 1.828/3.345/5.981/1.870 ms
2026-04-10 01:29:46.130519 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:29:46.130539 | orchestrator | + ping -c3 192.168.112.108
2026-04-10 01:29:46.145824 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-04-10 01:29:46.145927 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=10.3 ms
2026-04-10 01:29:47.138237 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.03 ms
2026-04-10 01:29:48.139981 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.84 ms
2026-04-10 01:29:48.140092 | orchestrator |
2026-04-10 01:29:48.140105 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-04-10 01:29:48.140114 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:29:48.140123 | orchestrator | rtt min/avg/max/mdev = 1.843/4.722/10.298/3.943 ms
2026-04-10 01:29:48.140131 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:29:48.140138 | orchestrator | + ping -c3 192.168.112.179
2026-04-10 01:29:48.153035 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-04-10 01:29:48.153124 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.77 ms
2026-04-10 01:29:49.148597 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.49 ms
2026-04-10 01:29:50.149133 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.30 ms
2026-04-10 01:29:50.149192 | orchestrator |
2026-04-10 01:29:50.149199 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-04-10 01:29:50.149205 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:29:50.149209 | orchestrator | rtt min/avg/max/mdev = 1.299/3.517/7.767/3.006 ms
2026-04-10 01:29:50.149593 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:29:50.149612 | orchestrator | + ping -c3 192.168.112.112
2026-04-10 01:29:50.156041 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-04-10 01:29:50.156100 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=4.02 ms
2026-04-10 01:29:51.155518 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.11 ms
2026-04-10 01:29:52.157027 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.88 ms
2026-04-10 01:29:52.157180 | orchestrator |
2026-04-10 01:29:52.157192 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-04-10 01:29:52.157201 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-10 01:29:52.157212 | orchestrator | rtt min/avg/max/mdev = 1.875/2.670/4.024/0.962 ms
2026-04-10 01:29:52.157298 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-04-10 01:29:53.750966 | orchestrator | 2026-04-10 01:29:53 | ERROR  | Unable to get ansible vault password
2026-04-10 01:29:53.751622 | orchestrator | 2026-04-10 01:29:53 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-10 01:29:53.751672 | orchestrator | 2026-04-10 01:29:53 | ERROR  | Dropping encrypted entries
2026-04-10 01:29:55.396018 | orchestrator | 2026-04-10 01:29:55 | INFO  | Live migrating server 2e86ece9-5403-4d48-b0d3-813e4a827038
2026-04-10 01:30:05.127992 | orchestrator | 2026-04-10 01:30:05 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:07.392228 | orchestrator | 2026-04-10 01:30:07 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:09.711836 | orchestrator | 2026-04-10 01:30:09 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:12.005456 | orchestrator | 2026-04-10 01:30:12 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:14.308937 | orchestrator | 2026-04-10 01:30:14 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:16.608456 | orchestrator | 2026-04-10 01:30:16 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:18.800552 | orchestrator | 2026-04-10 01:30:18 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:21.027116 | orchestrator | 2026-04-10 01:30:21 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) is still in progress
2026-04-10 01:30:23.303235 | orchestrator | 2026-04-10 01:30:23 | INFO  | Live migration of 2e86ece9-5403-4d48-b0d3-813e4a827038 (test-4) completed with status ACTIVE
2026-04-10 01:30:23.303320 | orchestrator | 2026-04-10 01:30:23 | INFO  | Live migrating server 50f46a7b-0771-478a-a406-4febd5208272
2026-04-10 01:30:33.393752 | orchestrator | 2026-04-10 01:30:33 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:35.711697 | orchestrator | 2026-04-10 01:30:35 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:38.065714 | orchestrator | 2026-04-10 01:30:38 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:40.384723 | orchestrator | 2026-04-10 01:30:40 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:42.594182 | orchestrator | 2026-04-10 01:30:42 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:44.782127 | orchestrator | 2026-04-10 01:30:44 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:47.079460 | orchestrator | 2026-04-10 01:30:47 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:49.381073 | orchestrator | 2026-04-10 01:30:49 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) is still in progress
2026-04-10 01:30:51.714099 | orchestrator | 2026-04-10 01:30:51 | INFO  | Live migration of 50f46a7b-0771-478a-a406-4febd5208272 (test-3) completed with status ACTIVE
2026-04-10 01:30:51.714255 | orchestrator | 2026-04-10 01:30:51 | INFO  | Live migrating server d0dd826e-8a47-487a-9b70-913ce8a64b03
2026-04-10 01:31:00.827547 | orchestrator | 2026-04-10 01:31:00 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:03.203422 | orchestrator | 2026-04-10 01:31:03 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:05.591701 | orchestrator | 2026-04-10 01:31:05 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:07.873687 | orchestrator | 2026-04-10 01:31:07 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:10.205007 | orchestrator | 2026-04-10 01:31:10 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:12.481366 | orchestrator | 2026-04-10 01:31:12 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:14.819186 | orchestrator | 2026-04-10 01:31:14 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:17.112533 | orchestrator | 2026-04-10 01:31:17 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) is still in progress
2026-04-10 01:31:19.410339 | orchestrator | 2026-04-10 01:31:19 | INFO  | Live migration of d0dd826e-8a47-487a-9b70-913ce8a64b03 (test-2) completed with status ACTIVE
2026-04-10 01:31:19.410415 | orchestrator | 2026-04-10 01:31:19 | INFO  | Live migrating server 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998
2026-04-10 01:31:29.535056 | orchestrator | 2026-04-10 01:31:29 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:31.792157 | orchestrator | 2026-04-10 01:31:31 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:34.148007 | orchestrator | 2026-04-10 01:31:34 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:36.470391 | orchestrator | 2026-04-10 01:31:36 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:38.671642 | orchestrator | 2026-04-10 01:31:38 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:40.907477 | orchestrator | 2026-04-10 01:31:40 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:43.219132 | orchestrator | 2026-04-10 01:31:43 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:45.497183 | orchestrator | 2026-04-10 01:31:45 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) is still in progress
2026-04-10 01:31:47.778248 | orchestrator | 2026-04-10 01:31:47 | INFO  | Live migration of 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 (test-1) completed with status ACTIVE
2026-04-10 01:31:47.778336 | orchestrator | 2026-04-10 01:31:47 | INFO  | Live migrating server 8dd2d72c-6249-4335-8abd-e14e6e1198dd
2026-04-10 01:31:58.370184 | orchestrator | 2026-04-10 01:31:58 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:00.736491 | orchestrator | 2026-04-10 01:32:00 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:03.005476 | orchestrator | 2026-04-10 01:32:03 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:05.373134 | orchestrator | 2026-04-10 01:32:05 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:07.723900 | orchestrator | 2026-04-10 01:32:07 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:10.003131 | orchestrator | 2026-04-10 01:32:10 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:12.379614 | orchestrator | 2026-04-10 01:32:12 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:14.945730 | orchestrator | 2026-04-10 01:32:14 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:17.325442 | orchestrator | 2026-04-10 01:32:17 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:19.555611 | orchestrator | 2026-04-10 01:32:19 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:21.861509 | orchestrator | 2026-04-10 01:32:21 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) is still in progress
2026-04-10 01:32:24.363153 | orchestrator | 2026-04-10 01:32:24 | INFO  | Live migration of 8dd2d72c-6249-4335-8abd-e14e6e1198dd (test) completed with status ACTIVE
2026-04-10 01:32:24.682288 | orchestrator | + compute_list
2026-04-10 01:32:24.682362 | orchestrator | + osism manage compute list testbed-node-3
2026-04-10 01:32:26.340871 | orchestrator | 2026-04-10 01:32:26 | ERROR  | Unable to get ansible vault password
2026-04-10 01:32:26.340991 | orchestrator | 2026-04-10 01:32:26 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-10 01:32:26.341013 | orchestrator | 2026-04-10 01:32:26 | ERROR  | Dropping encrypted entries
2026-04-10 01:32:27.476933 | orchestrator | +------+--------+----------+
2026-04-10 01:32:27.477007 | orchestrator | | ID   | Name   | Status   |
2026-04-10 01:32:27.477014 | orchestrator | |------+--------+----------|
2026-04-10 01:32:27.477018 | orchestrator | +------+--------+----------+
2026-04-10 01:32:27.803070 | orchestrator | + osism manage compute list testbed-node-4
2026-04-10 01:32:29.429090 | orchestrator | 2026-04-10 01:32:29 | ERROR  | Unable to get ansible vault password
2026-04-10 01:32:29.429182 | orchestrator | 2026-04-10 01:32:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-10 01:32:29.429191 | orchestrator | 2026-04-10 01:32:29 | ERROR  | Dropping encrypted entries
2026-04-10 01:32:30.568417 | orchestrator | +------+--------+----------+
2026-04-10 01:32:30.568510 | orchestrator | | ID   | Name   | Status   |
2026-04-10 01:32:30.568519 | orchestrator | |------+--------+----------|
2026-04-10 01:32:30.568526 | orchestrator | +------+--------+----------+
2026-04-10 01:32:30.893490 | orchestrator | + osism manage compute list testbed-node-5
2026-04-10 01:32:32.500947 | orchestrator | 2026-04-10 01:32:32 | ERROR  | Unable to get ansible vault password
2026-04-10 01:32:32.501097 | orchestrator | 2026-04-10 01:32:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-10 01:32:32.501108 | orchestrator | 2026-04-10 01:32:32 | ERROR  | Dropping encrypted entries
2026-04-10 01:32:34.070522 | orchestrator | +--------------------------------------+--------+----------+
2026-04-10 01:32:34.070603 | orchestrator | | ID                                   | Name   | Status   |
2026-04-10 01:32:34.070611 | orchestrator | |--------------------------------------+--------+----------|
2026-04-10 01:32:34.070616 | orchestrator | | 2e86ece9-5403-4d48-b0d3-813e4a827038 | test-4 | ACTIVE   |
2026-04-10 01:32:34.070641 | orchestrator | | 50f46a7b-0771-478a-a406-4febd5208272 | test-3 | ACTIVE   |
2026-04-10 01:32:34.070645 | orchestrator | | d0dd826e-8a47-487a-9b70-913ce8a64b03 | test-2 | ACTIVE   |
2026-04-10 01:32:34.070649 | orchestrator | | 8adcfcf0-86a7-40cd-8c3e-0ca1a10d1998 | test-1 | ACTIVE   |
2026-04-10 01:32:34.070653 | orchestrator | | 8dd2d72c-6249-4335-8abd-e14e6e1198dd | test   | ACTIVE   |
2026-04-10 01:32:34.070657 | orchestrator | +--------------------------------------+--------+----------+
2026-04-10 01:32:34.444888 | orchestrator | + server_ping
2026-04-10 01:32:34.446444 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-10 01:32:34.446509 | orchestrator | ++ tr -d '\r'
2026-04-10 01:32:37.450446 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:32:37.450528 | orchestrator | + ping -c3 192.168.112.101
2026-04-10 01:32:37.462311 | orchestrator | PING 192.168.112.101 (192.168.112.101) 56(84) bytes of data.
2026-04-10 01:32:37.462388 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=1 ttl=63 time=7.75 ms
2026-04-10 01:32:38.457961 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=2 ttl=63 time=1.45 ms
2026-04-10 01:32:39.460552 | orchestrator | 64 bytes from 192.168.112.101: icmp_seq=3 ttl=63 time=1.68 ms
2026-04-10 01:32:39.460605 | orchestrator |
2026-04-10 01:32:39.460611 | orchestrator | --- 192.168.112.101 ping statistics ---
2026-04-10 01:32:39.460616 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-10 01:32:39.460620 | orchestrator | rtt min/avg/max/mdev = 1.448/3.626/7.752/2.918 ms
2026-04-10 01:32:39.460832 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:32:39.460883 | orchestrator | + ping -c3 192.168.112.164
2026-04-10 01:32:39.469520 | orchestrator | PING 192.168.112.164 (192.168.112.164) 56(84) bytes of data.
2026-04-10 01:32:39.469569 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=1 ttl=63 time=3.63 ms
2026-04-10 01:32:40.470282 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=2 ttl=63 time=2.26 ms
2026-04-10 01:32:41.471439 | orchestrator | 64 bytes from 192.168.112.164: icmp_seq=3 ttl=63 time=1.90 ms
2026-04-10 01:32:41.471518 | orchestrator |
2026-04-10 01:32:41.471524 | orchestrator | --- 192.168.112.164 ping statistics ---
2026-04-10 01:32:41.471530 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:32:41.471535 | orchestrator | rtt min/avg/max/mdev = 1.904/2.597/3.627/0.742 ms
2026-04-10 01:32:41.472390 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:32:41.472449 | orchestrator | + ping -c3 192.168.112.108
2026-04-10 01:32:41.487196 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-04-10 01:32:41.487272 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=9.76 ms
2026-04-10 01:32:42.480578 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=1.95 ms
2026-04-10 01:32:43.481658 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.56 ms
2026-04-10 01:32:43.481853 | orchestrator |
2026-04-10 01:32:43.481869 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-04-10 01:32:43.481876 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-10 01:32:43.481880 | orchestrator | rtt min/avg/max/mdev = 1.556/4.419/9.756/3.776 ms
2026-04-10 01:32:43.481942 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:32:43.481948 | orchestrator | + ping -c3 192.168.112.179
2026-04-10 01:32:43.496728 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-04-10 01:32:43.496914 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=8.97 ms
2026-04-10 01:32:44.491711 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.05 ms
2026-04-10 01:32:45.491956 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.84 ms
2026-04-10 01:32:45.492045 | orchestrator |
2026-04-10 01:32:45.492052 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-04-10 01:32:45.492059 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-10 01:32:45.492064 | orchestrator | rtt min/avg/max/mdev = 1.835/4.286/8.971/3.313 ms
2026-04-10 01:32:45.492089 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-10 01:32:45.493006 | orchestrator | + ping -c3 192.168.112.112
2026-04-10 01:32:45.499036 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-04-10 01:32:45.499114 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=4.91 ms
2026-04-10 01:32:46.498308 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.01 ms
2026-04-10 01:32:47.499161 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.65 ms
2026-04-10 01:32:47.499239 | orchestrator |
2026-04-10 01:32:47.499247 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-04-10 01:32:47.499253 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-10 01:32:47.499258 | orchestrator | rtt min/avg/max/mdev = 1.650/2.857/4.914/1.461 ms
2026-04-10 01:32:47.713435 | orchestrator | ok: Runtime: 0:17:58.141737
2026-04-10 01:32:47.764292 |
2026-04-10 01:32:47.764436 | TASK [Run tempest]
2026-04-10 01:32:48.506543 | orchestrator |
2026-04-10 01:32:48.506681 | orchestrator | # Tempest
2026-04-10 01:32:48.506692 | orchestrator |
2026-04-10 01:32:48.506697 | orchestrator | + set -e
2026-04-10 01:32:48.506704 | orchestrator | + source /opt/manager-vars.sh
2026-04-10 01:32:48.506711 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-10 01:32:48.506720 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-10 01:32:48.506762 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-10 01:32:48.506776 | orchestrator | ++ CEPH_VERSION=reef
2026-04-10 01:32:48.506789 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-10 01:32:48.506797 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-10 01:32:48.506810 | orchestrator | ++ export MANAGER_VERSION=latest
2026-04-10 01:32:48.506820 | orchestrator | ++ MANAGER_VERSION=latest
2026-04-10 01:32:48.506828 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-10 01:32:48.506838 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-10 01:32:48.506845 | orchestrator | ++ export ARA=false
2026-04-10 01:32:48.506853 | orchestrator | ++ ARA=false
2026-04-10 01:32:48.506869 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-10 01:32:48.506877 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-10 01:32:48.506883 | orchestrator | ++ export TEMPEST=true
2026-04-10 01:32:48.506892 | orchestrator | ++ TEMPEST=true
2026-04-10 01:32:48.506898 | orchestrator | ++ export IS_ZUUL=true
2026-04-10 01:32:48.506904 | orchestrator | ++ IS_ZUUL=true
2026-04-10 01:32:48.506914 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34
2026-04-10 01:32:48.506920 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.34
2026-04-10 01:32:48.506926 | orchestrator | ++ export EXTERNAL_API=false
2026-04-10 01:32:48.506933 | orchestrator | ++ EXTERNAL_API=false
2026-04-10 01:32:48.506939 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-10 01:32:48.506945 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-10 01:32:48.506951 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-10 01:32:48.506957 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-10 01:32:48.506964 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-10 01:32:48.506971 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-10 01:32:48.506978 | orchestrator | + echo
2026-04-10 01:32:48.506985 | orchestrator | + echo '# Tempest'
2026-04-10 01:32:48.506992 | orchestrator | + echo
2026-04-10 01:32:48.506997 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-04-10 01:32:48.507001 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-04-10 01:32:59.888635 | orchestrator | 2026-04-10 01:32:59 | INFO  | Prepare task for execution of tempest.
2026-04-10 01:32:59.962237 | orchestrator | 2026-04-10 01:32:59 | INFO  | Task 0e1fa93a-dbae-4822-83f3-b8c4de8b03fe (tempest) was prepared for execution.
2026-04-10 01:32:59.962320 | orchestrator | 2026-04-10 01:32:59 | INFO  | It takes a moment until task 0e1fa93a-dbae-4822-83f3-b8c4de8b03fe (tempest) has been started and output is visible here.
2026-04-10 01:34:15.802177 | orchestrator |
2026-04-10 01:34:15.802244 | orchestrator | PLAY [Run tempest] *************************************************************
2026-04-10 01:34:15.802256 | orchestrator |
2026-04-10 01:34:15.802262 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-04-10 01:34:15.802276 | orchestrator | Friday 10 April 2026  01:33:03 +0000 (0:00:00.322)       0:00:00.322 **********
2026-04-10 01:34:15.802282 | orchestrator | changed: [testbed-manager]
2026-04-10 01:34:15.802287 | orchestrator |
2026-04-10 01:34:15.802294 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-04-10 01:34:15.802300 | orchestrator | Friday 10 April 2026  01:33:04 +0000 (0:00:01.054)       0:00:01.377 **********
2026-04-10 01:34:15.802305 | orchestrator | changed: [testbed-manager]
2026-04-10 01:34:15.802311 | orchestrator |
2026-04-10 01:34:15.802316 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-04-10 01:34:15.802322 | orchestrator | Friday 10 April 2026  01:33:05 +0000 (0:00:01.215)       0:00:02.592 **********
2026-04-10 01:34:15.802328 | orchestrator | ok: [testbed-manager]
2026-04-10 01:34:15.802335 | orchestrator |
2026-04-10 01:34:15.802341 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-04-10 01:34:15.802347 | orchestrator | Friday 10 April 2026  01:33:06 +0000 (0:00:00.451)       0:00:03.044 **********
2026-04-10 01:34:15.802353 | orchestrator | changed: [testbed-manager]
2026-04-10 01:34:15.802359 | orchestrator |
2026-04-10 01:34:15.802366 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-04-10 01:34:15.802386 | orchestrator | Friday 10 April 2026  01:33:26 +0000 (0:00:20.585)       0:00:23.629 **********
2026-04-10 01:34:15.802408 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-04-10 01:34:15.802413 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-04-10 01:34:15.802419 | orchestrator |
2026-04-10 01:34:15.802426 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-04-10 01:34:15.802432 | orchestrator | Friday 10 April 2026  01:33:34 +0000 (0:00:08.212)       0:00:31.842 **********
2026-04-10 01:34:15.802439 | orchestrator | ok: [testbed-manager] => {
2026-04-10 01:34:15.802445 | orchestrator |     "changed": false,
2026-04-10 01:34:15.802450 | orchestrator |     "msg": "All assertions passed"
2026-04-10 01:34:15.802455 | orchestrator | }
2026-04-10 01:34:15.802461 | orchestrator |
2026-04-10 01:34:15.802467 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-04-10 01:34:15.802473 | orchestrator | Friday 10 April 2026  01:33:35 +0000 (0:00:00.166)       0:00:32.009 **********
2026-04-10 01:34:15.802480 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802486 | orchestrator |
2026-04-10 01:34:15.802491 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-04-10 01:34:15.802497 | orchestrator | Friday 10 April 2026  01:33:38 +0000 (0:00:03.679)       0:00:35.688 **********
2026-04-10 01:34:15.802503 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802509 | orchestrator |
2026-04-10 01:34:15.802514 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-04-10 01:34:15.802520 | orchestrator | Friday 10 April 2026  01:33:40 +0000 (0:00:01.885)       0:00:37.574 **********
2026-04-10 01:34:15.802526 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802532 | orchestrator |
2026-04-10 01:34:15.802538 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-04-10 01:34:15.802545 | orchestrator | Friday 10 April 2026  01:33:44 +0000 (0:00:03.826)       0:00:41.400 **********
2026-04-10 01:34:15.802551 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802557 | orchestrator |
2026-04-10 01:34:15.802564 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-04-10 01:34:15.802570 | orchestrator | Friday 10 April 2026  01:33:44 +0000 (0:00:00.177)       0:00:41.577 **********
2026-04-10 01:34:15.802576 | orchestrator | changed: [testbed-manager]
2026-04-10 01:34:15.802583 | orchestrator |
2026-04-10 01:34:15.802589 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-04-10 01:34:15.802596 | orchestrator | Friday 10 April 2026  01:33:47 +0000 (0:00:02.514)       0:00:44.092 **********
2026-04-10 01:34:15.802601 | orchestrator | changed: [testbed-manager]
2026-04-10 01:34:15.802605 | orchestrator |
2026-04-10 01:34:15.802609 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-04-10 01:34:15.802613 | orchestrator | Friday 10 April 2026  01:33:56 +0000 (0:00:08.996)       0:00:53.089 **********
2026-04-10 01:34:15.802616 | orchestrator | changed: [testbed-manager]
2026-04-10 01:34:15.802620 | orchestrator |
2026-04-10 01:34:15.802624 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-04-10 01:34:15.802628 | orchestrator | Friday 10 April 2026  01:33:56 +0000 (0:00:00.685)       0:00:53.774 **********
2026-04-10 01:34:15.802631 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802635 | orchestrator |
2026-04-10 01:34:15.802639 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-04-10 01:34:15.802643 | orchestrator | Friday 10 April 2026  01:33:58 +0000 (0:00:01.570)       0:00:55.344 **********
2026-04-10 01:34:15.802647 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802650 | orchestrator |
2026-04-10 01:34:15.802685 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-04-10 01:34:15.802689 | orchestrator | Friday 10 April 2026  01:33:59 +0000 (0:00:00.193)       0:00:56.894 **********
2026-04-10 01:34:15.802693 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802697 | orchestrator |
2026-04-10 01:34:15.802701 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-04-10 01:34:15.802711 | orchestrator | Friday 10 April 2026  01:34:00 +0000 (0:00:00.193)       0:00:57.087 **********
2026-04-10 01:34:15.802714 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802718 | orchestrator |
2026-04-10 01:34:15.802727 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-04-10 01:34:15.802731 | orchestrator | Friday 10 April 2026  01:34:00 +0000 (0:00:00.398)       0:00:57.486 **********
2026-04-10 01:34:15.802735 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-10 01:34:15.802739 | orchestrator |
2026-04-10 01:34:15.802743 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-04-10 01:34:15.802760 | orchestrator | Friday 10 April 2026  01:34:04 +0000 (0:00:04.014)       0:01:01.500 **********
2026-04-10 01:34:15.802764 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-04-10 01:34:15.802768 | orchestrator |     "changed": false,
2026-04-10 01:34:15.802771 | orchestrator |     "msg": "All assertions passed"
2026-04-10 01:34:15.802775 | orchestrator | }
2026-04-10 01:34:15.802779 | orchestrator |
2026-04-10 01:34:15.802783 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-04-10 01:34:15.802787 | orchestrator | Friday 10 April 2026  01:34:04 +0000 (0:00:00.195)       0:01:01.695 **********
2026-04-10 01:34:15.802791 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-04-10
01:34:15.802796 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-04-10 01:34:15.802800 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:34:15.802803 | orchestrator | 2026-04-10 01:34:15.802807 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-04-10 01:34:15.802811 | orchestrator | Friday 10 April 2026 01:34:04 +0000 (0:00:00.198) 0:01:01.894 ********** 2026-04-10 01:34:15.802815 | orchestrator | skipping: [testbed-manager] 2026-04-10 01:34:15.802818 | orchestrator | 2026-04-10 01:34:15.802822 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-04-10 01:34:15.802826 | orchestrator | Friday 10 April 2026 01:34:05 +0000 (0:00:00.169) 0:01:02.064 ********** 2026-04-10 01:34:15.802830 | orchestrator | ok: [testbed-manager] 2026-04-10 01:34:15.802833 | orchestrator | 2026-04-10 01:34:15.802837 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-04-10 01:34:15.802841 | orchestrator | Friday 10 April 2026 01:34:05 +0000 (0:00:00.473) 0:01:02.537 ********** 2026-04-10 01:34:15.802845 | orchestrator | changed: [testbed-manager] 2026-04-10 01:34:15.802848 | orchestrator | 2026-04-10 01:34:15.802852 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-04-10 01:34:15.802861 | orchestrator | Friday 10 April 2026 01:34:06 +0000 (0:00:00.876) 0:01:03.413 ********** 2026-04-10 01:34:15.802865 | orchestrator | ok: [testbed-manager] 2026-04-10 01:34:15.802869 | orchestrator | 2026-04-10 01:34:15.802873 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-04-10 01:34:15.802876 | orchestrator | Friday 10 April 2026 01:34:06 +0000 (0:00:00.405) 0:01:03.819 ********** 2026-04-10 01:34:15.802880 | orchestrator | skipping: [testbed-manager] 2026-04-10 
01:34:15.802884 | orchestrator | 2026-04-10 01:34:15.802888 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-04-10 01:34:15.802891 | orchestrator | Friday 10 April 2026 01:34:07 +0000 (0:00:00.225) 0:01:04.045 ********** 2026-04-10 01:34:15.802895 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-04-10 01:34:15.802899 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-04-10 01:34:15.802903 | orchestrator | 2026-04-10 01:34:15.802907 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-04-10 01:34:15.802911 | orchestrator | Friday 10 April 2026 01:34:14 +0000 (0:00:07.680) 0:01:11.726 ********** 2026-04-10 01:34:15.802914 | orchestrator | changed: [testbed-manager] 2026-04-10 01:34:15.802918 | orchestrator | 2026-04-10 01:34:15.802925 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-10 01:34:15.802929 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-10 01:34:15.802933 | orchestrator | 2026-04-10 01:34:15.802937 | orchestrator | 2026-04-10 01:34:15.802941 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-10 01:34:15.802944 | orchestrator | Friday 10 April 2026 01:34:15 +0000 (0:00:01.024) 0:01:12.751 ********** 2026-04-10 01:34:15.802948 | orchestrator | =============================================================================== 2026-04-10 01:34:15.802952 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.59s 2026-04-10 01:34:15.802956 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 9.00s 2026-04-10 01:34:15.802959 | orchestrator | 
osism.validations.tempest : Resolve image IDs --------------------------- 8.21s 2026-04-10 01:34:15.802963 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.68s 2026-04-10 01:34:15.802969 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 4.01s 2026-04-10 01:34:15.802973 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.83s 2026-04-10 01:34:15.802976 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.68s 2026-04-10 01:34:15.802980 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.51s 2026-04-10 01:34:15.802984 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.89s 2026-04-10 01:34:15.802987 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.57s 2026-04-10 01:34:15.802991 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.55s 2026-04-10 01:34:15.802995 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.22s 2026-04-10 01:34:15.802998 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.05s 2026-04-10 01:34:15.803002 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.02s 2026-04-10 01:34:15.803006 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.88s 2026-04-10 01:34:15.803010 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.69s 2026-04-10 01:34:15.803014 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.47s 2026-04-10 01:34:15.803020 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.45s 2026-04-10 01:34:16.037774 | orchestrator | 
osism.validations.tempest : Get stats of include list ------------------- 0.41s 2026-04-10 01:34:16.037829 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.40s 2026-04-10 01:34:16.243237 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf 2026-04-10 01:34:16.247936 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf 2026-04-10 01:34:16.252482 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-10 01:34:16.252529 | orchestrator | 2026-04-10 01:34:16.252535 | orchestrator | ## IDENTITY (API) 2026-04-10 01:34:16.252540 | orchestrator | 2026-04-10 01:34:16.252544 | orchestrator | + echo 2026-04-10 01:34:16.252548 | orchestrator | + echo '## IDENTITY (API)' 2026-04-10 01:34:16.252552 | orchestrator | + echo 2026-04-10 01:34:16.252556 | orchestrator | + _tempest tempest.api.identity.v3 2026-04-10 01:34:16.252560 | orchestrator | + local regex=tempest.api.identity.v3 2026-04-10 01:34:16.254278 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-04-10 01:34:16.254374 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-10 01:34:16.256561 | orchestrator | + tee -a /opt/tempest/20260410-0134.log 2026-04-10 01:34:18.345177 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning: 2026-04-10 01:34:18.345286 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and 2026-04-10 01:34:18.345297 | orchestrator | we strongly recommend against using it for new projects. 
2026-04-10 01:34:18.345306 | orchestrator | 2026-04-10 01:34:18.345313 | orchestrator | If you are already using Eventlet, we recommend migrating to a different 2026-04-10 01:34:18.345319 | orchestrator | framework. For more detail see 2026-04-10 01:34:18.345327 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html 2026-04-10 01:34:18.345333 | orchestrator | 2026-04-10 01:34:18.345339 | orchestrator | __import__(import_str) 2026-04-10 01:34:19.820151 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-10 01:34:19.820257 | orchestrator | Did you mean one of these? 2026-04-10 01:34:19.820267 | orchestrator | help 2026-04-10 01:34:19.820272 | orchestrator | init 2026-04-10 01:34:20.199271 | orchestrator | 2026-04-10 01:34:20.199358 | orchestrator | ## IMAGE (API) 2026-04-10 01:34:20.199371 | orchestrator | 2026-04-10 01:34:20.199377 | orchestrator | + echo 2026-04-10 01:34:20.199384 | orchestrator | + echo '## IMAGE (API)' 2026-04-10 01:34:20.199391 | orchestrator | + echo 2026-04-10 01:34:20.199398 | orchestrator | + _tempest tempest.api.image.v2 2026-04-10 01:34:20.199404 | orchestrator | + local regex=tempest.api.image.v2 2026-04-10 01:34:20.200442 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16 2026-04-10 01:34:20.201639 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-10 01:34:20.206752 | orchestrator | + tee -a /opt/tempest/20260410-0134.log 2026-04-10 01:34:22.236996 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: 
EventletDeprecationWarning: 2026-04-10 01:34:22.237092 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and 2026-04-10 01:34:22.237104 | orchestrator | we strongly recommend against using it for new projects. 2026-04-10 01:34:22.237112 | orchestrator | 2026-04-10 01:34:22.237119 | orchestrator | If you are already using Eventlet, we recommend migrating to a different 2026-04-10 01:34:22.237126 | orchestrator | framework. For more detail see 2026-04-10 01:34:22.237133 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html 2026-04-10 01:34:22.237140 | orchestrator | 2026-04-10 01:34:22.237146 | orchestrator | __import__(import_str) 2026-04-10 01:34:23.720739 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-10 01:34:23.720832 | orchestrator | Did you mean one of these? 
2026-04-10 01:34:23.720842 | orchestrator | help
2026-04-10 01:34:23.720848 | orchestrator | init
2026-04-10 01:34:24.078718 | orchestrator |
2026-04-10 01:34:24.078816 | orchestrator | ## NETWORK (API)
2026-04-10 01:34:24.078828 | orchestrator |
2026-04-10 01:34:24.078834 | orchestrator | + echo
2026-04-10 01:34:24.078842 | orchestrator | + echo '## NETWORK (API)'
2026-04-10 01:34:24.078850 | orchestrator | + echo
2026-04-10 01:34:24.078856 | orchestrator | + _tempest tempest.api.network
2026-04-10 01:34:24.078861 | orchestrator | + local regex=tempest.api.network
2026-04-10 01:34:24.079577 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-04-10 01:34:24.081803 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-10 01:34:24.083359 | orchestrator | + tee -a /opt/tempest/20260410-0134.log
2026-04-10 01:34:26.109252 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-10 01:34:26.109300 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-10 01:34:26.109306 | orchestrator | we strongly recommend against using it for new projects.
2026-04-10 01:34:26.109312 | orchestrator |
2026-04-10 01:34:26.109317 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-10 01:34:26.109334 | orchestrator | framework. For more detail see
2026-04-10 01:34:26.109338 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-10 01:34:26.109342 | orchestrator |
2026-04-10 01:34:26.109346 | orchestrator | __import__(import_str)
2026-04-10 01:34:27.624152 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-10 01:34:27.624250 | orchestrator | Did you mean one of these?
2026-04-10 01:34:27.624261 | orchestrator | help
2026-04-10 01:34:27.624268 | orchestrator | init
2026-04-10 01:34:27.993126 | orchestrator |
2026-04-10 01:34:27.993195 | orchestrator | ## VOLUME (API)
2026-04-10 01:34:27.993206 | orchestrator |
2026-04-10 01:34:27.993213 | orchestrator | + echo
2026-04-10 01:34:27.993219 | orchestrator | + echo '## VOLUME (API)'
2026-04-10 01:34:27.993235 | orchestrator | + echo
2026-04-10 01:34:27.993242 | orchestrator | + _tempest tempest.api.volume
2026-04-10 01:34:27.993249 | orchestrator | + local regex=tempest.api.volume
2026-04-10 01:34:27.993821 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-04-10 01:34:27.995029 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-10 01:34:27.999833 | orchestrator | + tee -a /opt/tempest/20260410-0134.log
2026-04-10 01:34:30.030958 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-10 01:34:30.031053 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-10 01:34:30.031063 | orchestrator | we strongly recommend against using it for new projects.
2026-04-10 01:34:30.031071 | orchestrator |
2026-04-10 01:34:30.031078 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-10 01:34:30.031085 | orchestrator | framework. For more detail see
2026-04-10 01:34:30.031094 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-10 01:34:30.031108 | orchestrator |
2026-04-10 01:34:30.031889 | orchestrator | __import__(import_str)
2026-04-10 01:34:31.542630 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-10 01:34:31.542710 | orchestrator | Did you mean one of these?
2026-04-10 01:34:31.542722 | orchestrator | help
2026-04-10 01:34:31.542728 | orchestrator | init
2026-04-10 01:34:31.901160 | orchestrator |
2026-04-10 01:34:31.901260 | orchestrator | ## COMPUTE (API)
2026-04-10 01:34:31.901274 | orchestrator |
2026-04-10 01:34:31.901282 | orchestrator | + echo
2026-04-10 01:34:31.901289 | orchestrator | + echo '## COMPUTE (API)'
2026-04-10 01:34:31.901296 | orchestrator | + echo
2026-04-10 01:34:31.901302 | orchestrator | + _tempest tempest.api.compute
2026-04-10 01:34:31.901309 | orchestrator | + local regex=tempest.api.compute
2026-04-10 01:34:31.902974 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-04-10 01:34:31.903016 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-10 01:34:31.904027 | orchestrator | + tee -a /opt/tempest/20260410-0134.log
2026-04-10 01:34:33.922230 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-10 01:34:33.922277 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-10 01:34:33.922283 | orchestrator | we strongly recommend against using it for new projects.
2026-04-10 01:34:33.922288 | orchestrator |
2026-04-10 01:34:33.922292 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-10 01:34:33.922296 | orchestrator | framework. For more detail see
2026-04-10 01:34:33.922300 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-10 01:34:33.922311 | orchestrator |
2026-04-10 01:34:33.922320 | orchestrator | __import__(import_str)
2026-04-10 01:34:35.470283 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-10 01:34:35.470362 | orchestrator | Did you mean one of these?
2026-04-10 01:34:35.470374 | orchestrator | help
2026-04-10 01:34:35.470382 | orchestrator | init
2026-04-10 01:34:35.856305 | orchestrator |
2026-04-10 01:34:35.856373 | orchestrator | ## DNS (API)
2026-04-10 01:34:35.856389 | orchestrator |
2026-04-10 01:34:35.856401 | orchestrator | + echo
2026-04-10 01:34:35.856412 | orchestrator | + echo '## DNS (API)'
2026-04-10 01:34:35.856423 | orchestrator | + echo
2026-04-10 01:34:35.856434 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-04-10 01:34:35.856445 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-04-10 01:34:35.858103 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-04-10 01:34:35.858162 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-10 01:34:35.859695 | orchestrator | + tee -a /opt/tempest/20260410-0134.log
2026-04-10 01:34:37.902795 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-10 01:34:37.902845 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-10 01:34:37.902854 | orchestrator | we strongly recommend against using it for new projects.
2026-04-10 01:34:37.902861 | orchestrator |
2026-04-10 01:34:37.902869 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-10 01:34:37.902875 | orchestrator | framework. For more detail see
2026-04-10 01:34:37.902882 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-10 01:34:37.902889 | orchestrator |
2026-04-10 01:34:37.902896 | orchestrator | __import__(import_str)
2026-04-10 01:34:39.382805 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-10 01:34:39.382906 | orchestrator | Did you mean one of these?
2026-04-10 01:34:39.382917 | orchestrator | help
2026-04-10 01:34:39.382924 | orchestrator | init
2026-04-10 01:34:39.743978 | orchestrator |
2026-04-10 01:34:39.744100 | orchestrator | ## OBJECT-STORE (API)
2026-04-10 01:34:39.744113 | orchestrator |
2026-04-10 01:34:39.744120 | orchestrator | + echo
2026-04-10 01:34:39.744126 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-04-10 01:34:39.744133 | orchestrator | + echo
2026-04-10 01:34:39.744150 | orchestrator | + _tempest tempest.api.object_storage
2026-04-10 01:34:39.744158 | orchestrator | + local regex=tempest.api.object_storage
2026-04-10 01:34:39.745209 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-04-10 01:34:39.746170 | orchestrator | ++ date +%Y%m%d-%H%M
2026-04-10 01:34:39.747576 | orchestrator | + tee -a /opt/tempest/20260410-0134.log
2026-04-10 01:34:41.777086 | orchestrator | /usr/local/lib/python3.13/site-packages/oslo_utils/importutils.py:77: EventletDeprecationWarning:
2026-04-10 01:34:41.777168 | orchestrator | Eventlet is deprecated. It is currently being maintained in bugfix mode, and
2026-04-10 01:34:41.777179 | orchestrator | we strongly recommend against using it for new projects.
2026-04-10 01:34:41.777186 | orchestrator |
2026-04-10 01:34:41.777193 | orchestrator | If you are already using Eventlet, we recommend migrating to a different
2026-04-10 01:34:41.777199 | orchestrator | framework. For more detail see
2026-04-10 01:34:41.777205 | orchestrator | https://eventlet.readthedocs.io/en/latest/asyncio/migration.html
2026-04-10 01:34:41.777211 | orchestrator |
2026-04-10 01:34:41.777217 | orchestrator | __import__(import_str)
2026-04-10 01:34:43.292550 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-04-10 01:34:43.292689 | orchestrator | Did you mean one of these?
2026-04-10 01:34:43.292733 | orchestrator | help
2026-04-10 01:34:43.292742 | orchestrator | init
2026-04-10 01:34:43.872519 | orchestrator | ok: Runtime: 0:01:55.565557
2026-04-10 01:34:43.897503 |
2026-04-10 01:34:43.897680 | TASK [Check prometheus alert status]
2026-04-10 01:34:44.434919 | orchestrator | skipping: Conditional result was False
2026-04-10 01:34:44.438193 |
2026-04-10 01:34:44.438375 | PLAY RECAP
2026-04-10 01:34:44.438550 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-04-10 01:34:44.438624 |
2026-04-10 01:34:44.664898 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-10 01:34:44.667927 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-10 01:34:45.437735 |
2026-04-10 01:34:45.437908 | PLAY [Post output play]
2026-04-10 01:34:45.454274 |
2026-04-10 01:34:45.454425 | LOOP [stage-output : Register sources]
2026-04-10 01:34:45.523698 |
2026-04-10 01:34:45.524024 | TASK [stage-output : Check sudo]
2026-04-10 01:34:46.285062 | orchestrator | sudo: a password is required
2026-04-10 01:34:46.561538 | orchestrator | ok: Runtime: 0:00:00.008541
2026-04-10 01:34:46.577144 |
2026-04-10 01:34:46.577375 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-10 01:34:46.616630 |
2026-04-10 01:34:46.616975 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-10 01:34:46.686996 | orchestrator | ok
2026-04-10 01:34:46.695287 |
2026-04-10 01:34:46.695416 | LOOP [stage-output : Ensure target folders exist]
2026-04-10 01:34:47.136148 | orchestrator | ok: "docs"
2026-04-10 01:34:47.136490 |
2026-04-10 01:34:47.357707 | orchestrator | ok: "artifacts"
2026-04-10 01:34:47.570689 | orchestrator | ok: "logs"
2026-04-10 01:34:47.597586 |
2026-04-10 01:34:47.597781 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-10 01:34:47.639113 |
2026-04-10 01:34:47.639423 | TASK [stage-output : Make all log files readable]
2026-04-10 01:34:47.930061 | orchestrator | ok
2026-04-10 01:34:47.938665 |
2026-04-10 01:34:47.938807 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-10 01:34:47.973515 | orchestrator | skipping: Conditional result was False
2026-04-10 01:34:47.990763 |
2026-04-10 01:34:47.990958 | TASK [stage-output : Discover log files for compression]
2026-04-10 01:34:48.015676 | orchestrator | skipping: Conditional result was False
2026-04-10 01:34:48.030389 |
2026-04-10 01:34:48.030596 | LOOP [stage-output : Archive everything from logs]
2026-04-10 01:34:48.067854 |
2026-04-10 01:34:48.068010 | PLAY [Post cleanup play]
2026-04-10 01:34:48.076181 |
2026-04-10 01:34:48.076318 | TASK [Set cloud fact (Zuul deployment)]
2026-04-10 01:34:48.144172 | orchestrator | ok
2026-04-10 01:34:48.156165 |
2026-04-10 01:34:48.156314 | TASK [Set cloud fact (local deployment)]
2026-04-10 01:34:48.190979 | orchestrator | skipping: Conditional result was False
2026-04-10 01:34:48.210081 |
2026-04-10 01:34:48.210255 | TASK [Clean the cloud environment]
2026-04-10 01:34:48.721561 | orchestrator | 2026-04-10 01:34:48 - clean up servers
2026-04-10 01:34:49.463242 | orchestrator | 2026-04-10 01:34:49 - testbed-manager
2026-04-10 01:34:49.544992 | orchestrator | 2026-04-10 01:34:49 - testbed-node-2
2026-04-10 01:34:49.638005 | orchestrator | 2026-04-10 01:34:49 - testbed-node-5
2026-04-10 01:34:49.719833 | orchestrator | 2026-04-10 01:34:49 - testbed-node-3
2026-04-10 01:34:49.811741 | orchestrator | 2026-04-10 01:34:49 - testbed-node-4
2026-04-10 01:34:49.904142 | orchestrator | 2026-04-10 01:34:49 - testbed-node-1
2026-04-10 01:34:49.990133 | orchestrator | 2026-04-10 01:34:49 - testbed-node-0
2026-04-10 01:34:50.070798 | orchestrator | 2026-04-10 01:34:50 - clean up keypairs
2026-04-10 01:34:50.088003 | orchestrator | 2026-04-10 01:34:50 - testbed
2026-04-10 01:34:50.112477 | orchestrator | 2026-04-10 01:34:50 - wait for servers to be gone
2026-04-10 01:35:05.308505 | orchestrator | 2026-04-10 01:35:05 - clean up ports
2026-04-10 01:35:05.544773 | orchestrator | 2026-04-10 01:35:05 - 4cd8dcb4-31a2-44b2-9449-8d04daef346a
2026-04-10 01:35:05.789489 | orchestrator | 2026-04-10 01:35:05 - 9b07f55f-647f-4a80-b032-92dfb014d615
2026-04-10 01:35:06.200798 | orchestrator | 2026-04-10 01:35:06 - a2d9da17-ec3b-4b17-b6cf-e2aae1136b91
2026-04-10 01:35:06.468032 | orchestrator | 2026-04-10 01:35:06 - b44d1548-eb64-4b66-b755-658c7646f7f4
2026-04-10 01:35:06.702359 | orchestrator | 2026-04-10 01:35:06 - bbacfe98-cd16-40f8-84ce-4cb4edce5b27
2026-04-10 01:35:06.919377 | orchestrator | 2026-04-10 01:35:06 - d6aecf41-df5a-44fd-a838-e2d00c1591d3
2026-04-10 01:35:07.156436 | orchestrator | 2026-04-10 01:35:07 - fa58d1b6-8646-4157-ad52-bb297975fb0f
2026-04-10 01:35:07.402147 | orchestrator | 2026-04-10 01:35:07 - clean up volumes
2026-04-10 01:35:07.527979 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-3-node-base
2026-04-10 01:35:07.569332 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-4-node-base
2026-04-10 01:35:07.607800 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-5-node-base
2026-04-10 01:35:07.649336 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-0-node-base
2026-04-10 01:35:07.690666 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-1-node-base
2026-04-10 01:35:07.726880 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-2-node-base
2026-04-10 01:35:07.772707 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-manager-base
2026-04-10 01:35:07.816920 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-8-node-5
2026-04-10 01:35:07.860400 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-0-node-3
2026-04-10 01:35:07.904984 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-1-node-4
2026-04-10 01:35:07.948272 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-5-node-5
2026-04-10 01:35:07.990701 | orchestrator | 2026-04-10 01:35:07 - testbed-volume-6-node-3
2026-04-10 01:35:08.032491 | orchestrator | 2026-04-10 01:35:08 - testbed-volume-3-node-3
2026-04-10 01:35:08.072221 | orchestrator | 2026-04-10 01:35:08 - testbed-volume-2-node-5
2026-04-10 01:35:08.118271 | orchestrator | 2026-04-10 01:35:08 - testbed-volume-7-node-4
2026-04-10 01:35:08.166444 | orchestrator | 2026-04-10 01:35:08 - testbed-volume-4-node-4
2026-04-10 01:35:08.210098 | orchestrator | 2026-04-10 01:35:08 - disconnect routers
2026-04-10 01:35:08.337874 | orchestrator | 2026-04-10 01:35:08 - testbed
2026-04-10 01:35:09.471004 | orchestrator | 2026-04-10 01:35:09 - clean up subnets
2026-04-10 01:35:09.541186 | orchestrator | 2026-04-10 01:35:09 - subnet-testbed-management
2026-04-10 01:35:09.736048 | orchestrator | 2026-04-10 01:35:09 - clean up networks
2026-04-10 01:35:09.918784 | orchestrator | 2026-04-10 01:35:09 - net-testbed-management
2026-04-10 01:35:10.263552 | orchestrator | 2026-04-10 01:35:10 - clean up security groups
2026-04-10 01:35:10.319645 | orchestrator | 2026-04-10 01:35:10 - testbed-management
2026-04-10 01:35:10.454768 | orchestrator | 2026-04-10 01:35:10 - testbed-node
2026-04-10 01:35:10.598423 | orchestrator | 2026-04-10 01:35:10 - clean up floating ips
2026-04-10 01:35:10.635055 | orchestrator | 2026-04-10 01:35:10 - 81.163.192.34
2026-04-10 01:35:10.995152 | orchestrator | 2026-04-10 01:35:10 - clean up routers
2026-04-10 01:35:11.120688 | orchestrator | 2026-04-10 01:35:11 - testbed
2026-04-10 01:35:12.773855 | orchestrator | ok: Runtime: 0:00:24.184010
2026-04-10 01:35:12.778472 |
2026-04-10 01:35:12.778668 | PLAY RECAP
2026-04-10 01:35:12.778793 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-10 01:35:12.778893 |
2026-04-10 01:35:12.913269 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-10 01:35:12.914367 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-10 01:35:13.634734 |
2026-04-10 01:35:13.634931 | PLAY [Cleanup play]
2026-04-10 01:35:13.651224 |
2026-04-10 01:35:13.651367 | TASK [Set cloud fact (Zuul deployment)]
2026-04-10 01:35:13.705227 | orchestrator | ok
2026-04-10 01:35:13.713890 |
2026-04-10 01:35:13.714033 | TASK [Set cloud fact (local deployment)]
2026-04-10 01:35:13.748449 | orchestrator | skipping: Conditional result was False
2026-04-10 01:35:13.765703 |
2026-04-10 01:35:13.765853 | TASK [Clean the cloud environment]
2026-04-10 01:35:14.959147 | orchestrator | 2026-04-10 01:35:14 - clean up servers
2026-04-10 01:35:15.587426 | orchestrator | 2026-04-10 01:35:15 - clean up keypairs
2026-04-10 01:35:15.605873 | orchestrator | 2026-04-10 01:35:15 - wait for servers to be gone
2026-04-10 01:35:15.648990 | orchestrator | 2026-04-10 01:35:15 - clean up ports
2026-04-10 01:35:15.735739 | orchestrator | 2026-04-10 01:35:15 - clean up volumes
2026-04-10 01:35:15.819963 | orchestrator | 2026-04-10 01:35:15 - disconnect routers
2026-04-10 01:35:15.849892 | orchestrator | 2026-04-10 01:35:15 - clean up subnets
2026-04-10 01:35:15.881461 | orchestrator | 2026-04-10 01:35:15 - clean up networks
2026-04-10 01:35:16.050351 | orchestrator | 2026-04-10 01:35:16 - clean up security groups
2026-04-10 01:35:16.091338 | orchestrator | 2026-04-10 01:35:16 - clean up floating ips
2026-04-10 01:35:16.132444 | orchestrator | 2026-04-10 01:35:16 - clean up routers
2026-04-10 01:35:16.306978 | orchestrator | ok: Runtime: 0:00:01.618653
2026-04-10 01:35:16.309729 |
2026-04-10 01:35:16.309847 | PLAY RECAP
2026-04-10 01:35:16.309935 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-10 01:35:16.309980 |
2026-04-10 01:35:16.439612 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-10 01:35:16.442711 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-10 01:35:17.218733 |
2026-04-10 01:35:17.218941 | PLAY [Base post-fetch]
2026-04-10 01:35:17.234471 |
2026-04-10 01:35:17.234644 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-10 01:35:17.289846 | orchestrator | skipping: Conditional result was False
2026-04-10 01:35:17.296928 |
2026-04-10 01:35:17.297068 | TASK [fetch-output : Set log path for single node]
2026-04-10 01:35:17.327389 | orchestrator | ok
2026-04-10 01:35:17.333533 |
2026-04-10 01:35:17.333645 | LOOP [fetch-output : Ensure local output dirs]
2026-04-10 01:35:17.815694 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/work/logs"
2026-04-10 01:35:18.096907 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/work/artifacts"
2026-04-10 01:35:18.376936 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3fbdc7eebc9a432fbfedb79498829f7e/work/docs"
2026-04-10 01:35:18.400524 |
2026-04-10 01:35:18.400687 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-10 01:35:19.340780 | orchestrator | changed: .d..t......
./ 2026-04-10 01:35:19.341090 | orchestrator | changed: All items complete 2026-04-10 01:35:19.341143 | 2026-04-10 01:35:20.039168 | orchestrator | changed: .d..t...... ./ 2026-04-10 01:35:20.791906 | orchestrator | changed: .d..t...... ./ 2026-04-10 01:35:20.824059 | 2026-04-10 01:35:20.824235 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-10 01:35:20.864431 | orchestrator | skipping: Conditional result was False 2026-04-10 01:35:20.866446 | orchestrator | skipping: Conditional result was False 2026-04-10 01:35:20.886221 | 2026-04-10 01:35:20.886346 | PLAY RECAP 2026-04-10 01:35:20.886424 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-04-10 01:35:20.886594 | 2026-04-10 01:35:21.015184 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-10 01:35:21.017645 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-10 01:35:21.800581 | 2026-04-10 01:35:21.800754 | PLAY [Base post] 2026-04-10 01:35:21.815810 | 2026-04-10 01:35:21.815962 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-10 01:35:22.836533 | orchestrator | changed 2026-04-10 01:35:22.847180 | 2026-04-10 01:35:22.847312 | PLAY RECAP 2026-04-10 01:35:22.847382 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-10 01:35:22.847452 | 2026-04-10 01:35:22.980590 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-10 01:35:22.984433 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-10 01:35:23.786929 | 2026-04-10 01:35:23.787099 | PLAY [Base post-logs] 2026-04-10 01:35:23.797805 | 2026-04-10 01:35:23.797945 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-10 01:35:24.310307 | localhost | changed 2026-04-10 01:35:24.320992 | 2026-04-10 
01:35:24.321132 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-10 01:35:24.343335 | localhost | ok 2026-04-10 01:35:24.346424 | 2026-04-10 01:35:24.346540 | TASK [Set zuul-log-path fact] 2026-04-10 01:35:24.361199 | localhost | ok 2026-04-10 01:35:24.368990 | 2026-04-10 01:35:24.369094 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-10 01:35:24.393741 | localhost | ok 2026-04-10 01:35:24.396792 | 2026-04-10 01:35:24.396890 | TASK [upload-logs : Create log directories] 2026-04-10 01:35:24.871912 | localhost | changed 2026-04-10 01:35:24.875723 | 2026-04-10 01:35:24.875871 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-10 01:35:25.388754 | localhost -> localhost | ok: Runtime: 0:00:00.006739 2026-04-10 01:35:25.398443 | 2026-04-10 01:35:25.398643 | TASK [upload-logs : Upload logs to log server] 2026-04-10 01:35:25.952543 | localhost | Output suppressed because no_log was given 2026-04-10 01:35:25.955116 | 2026-04-10 01:35:25.955243 | LOOP [upload-logs : Compress console log and json output] 2026-04-10 01:35:26.007780 | localhost | skipping: Conditional result was False 2026-04-10 01:35:26.013353 | localhost | skipping: Conditional result was False 2026-04-10 01:35:26.024691 | 2026-04-10 01:35:26.024847 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-10 01:35:26.071177 | localhost | skipping: Conditional result was False 2026-04-10 01:35:26.071479 | 2026-04-10 01:35:26.078519 | localhost | skipping: Conditional result was False 2026-04-10 01:35:26.086331 | 2026-04-10 01:35:26.086575 | LOOP [upload-logs : Upload console log and json output]